Test Report: Hyper-V_Windows 18384

                    
                      818397ea37b8941bfdd3d988b855153c5c099b26:2024-03-14:33567
                    
                

Test fail (13/217)

x
+
TestAddons/parallel/Registry (64.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 23.4724ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9g2gl" [a3d1b2c5-1dbe-465c-a3cb-5e2c60dfc6aa] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.026286s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-98xjp" [cb78e5bd-1c28-45cc-b020-51f1a27eeb0a] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0114185s
addons_test.go:340: (dbg) Run:  kubectl --context addons-953400 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-953400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-953400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.4122632s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 ip: (2.4473105s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0314 17:48:32.577744    8776 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-953400 ip"
2024/03/14 17:48:34 [DEBUG] GET http://172.17.87.211:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 addons disable registry --alsologtostderr -v=1: (14.2915454s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-953400 -n addons-953400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-953400 -n addons-953400: (11.8624165s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 logs -n 25: (8.3016289s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-677800 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:40 UTC |                     |
	|         | -p download-only-677800                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| delete  | -p download-only-677800                                                                     | download-only-677800 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| start   | -o=json --download-only                                                                     | download-only-065000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC |                     |
	|         | -p download-only-065000                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| delete  | -p download-only-065000                                                                     | download-only-065000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| start   | -o=json --download-only                                                                     | download-only-788200 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC |                     |
	|         | -p download-only-788200                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| delete  | -p download-only-788200                                                                     | download-only-788200 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| delete  | -p download-only-677800                                                                     | download-only-677800 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| delete  | -p download-only-065000                                                                     | download-only-065000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| delete  | -p download-only-788200                                                                     | download-only-788200 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-808300 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC |                     |
	|         | binary-mirror-808300                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:50434                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-808300                                                                     | binary-mirror-808300 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:42 UTC |
	| addons  | disable dashboard -p                                                                        | addons-953400        | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:42 UTC |                     |
	|         | addons-953400                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-953400        | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:42 UTC |                     |
	|         | addons-953400                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-953400 --wait=true                                                                | addons-953400        | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:42 UTC | 14 Mar 24 17:48 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-953400 addons                                                                        | addons-953400        | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:48 UTC | 14 Mar 24 17:48 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-953400 ssh cat                                                                       | addons-953400        | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:48 UTC | 14 Mar 24 17:48 UTC |
	|         | /opt/local-path-provisioner/pvc-9b8b44a0-12f8-43a2-8d75-342adde9e68c_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-953400 ip                                                                            | addons-953400        | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:48 UTC | 14 Mar 24 17:48 UTC |
	| addons  | addons-953400 addons disable                                                                | addons-953400        | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:48 UTC | 14 Mar 24 17:48 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-953400 addons disable                                                                | addons-953400        | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:48 UTC | 14 Mar 24 17:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-953400        | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:48 UTC |                     |
	|         | -p addons-953400                                                                            |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-953400        | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:48 UTC |                     |
	|         | addons-953400                                                                               |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 17:42:00
	Running on machine: minikube7
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 17:42:00.637026    6448 out.go:291] Setting OutFile to fd 808 ...
	I0314 17:42:00.637726    6448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:42:00.637726    6448 out.go:304] Setting ErrFile to fd 876...
	I0314 17:42:00.637726    6448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:42:00.656642    6448 out.go:298] Setting JSON to false
	I0314 17:42:00.659637    6448 start.go:129] hostinfo: {"hostname":"minikube7","uptime":59925,"bootTime":1710378195,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 17:42:00.659637    6448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 17:42:00.665433    6448 out.go:177] * [addons-953400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 17:42:00.669676    6448 notify.go:220] Checking for updates...
	I0314 17:42:00.672865    6448 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 17:42:00.676795    6448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 17:42:00.679260    6448 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 17:42:00.681627    6448 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 17:42:00.683380    6448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 17:42:00.686102    6448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 17:42:05.799422    6448 out.go:177] * Using the hyperv driver based on user configuration
	I0314 17:42:05.801640    6448 start.go:297] selected driver: hyperv
	I0314 17:42:05.801640    6448 start.go:901] validating driver "hyperv" against <nil>
	I0314 17:42:05.802049    6448 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 17:42:05.850856    6448 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 17:42:05.851873    6448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 17:42:05.851873    6448 cni.go:84] Creating CNI manager for ""
	I0314 17:42:05.851873    6448 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 17:42:05.852400    6448 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 17:42:05.852476    6448 start.go:340] cluster config:
	{Name:addons-953400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-953400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 17:42:05.852476    6448 iso.go:125] acquiring lock: {Name:mk1b3e73402180391a20a865a9454da445c269fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 17:42:05.855370    6448 out.go:177] * Starting "addons-953400" primary control-plane node in "addons-953400" cluster
	I0314 17:42:05.858772    6448 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 17:42:05.858772    6448 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0314 17:42:05.858772    6448 cache.go:56] Caching tarball of preloaded images
	I0314 17:42:05.859783    6448 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 17:42:05.859783    6448 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 17:42:05.859783    6448 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\config.json ...
	I0314 17:42:05.860643    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\config.json: {Name:mk6353ba3cc11f6cc872b03b2da1517f0638c6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:42:05.860825    6448 start.go:360] acquireMachinesLock for addons-953400: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 17:42:05.861751    6448 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-953400"
	I0314 17:42:05.861893    6448 start.go:93] Provisioning new machine with config: &{Name:addons-953400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:addons-953400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 17:42:05.861893    6448 start.go:125] createHost starting for "" (driver="hyperv")
	I0314 17:42:05.864254    6448 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0314 17:42:05.864508    6448 start.go:159] libmachine.API.Create for "addons-953400" (driver="hyperv")
	I0314 17:42:05.864508    6448 client.go:168] LocalClient.Create starting
	I0314 17:42:05.865226    6448 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0314 17:42:06.060846    6448 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0314 17:42:06.231200    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0314 17:42:08.252940    6448 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0314 17:42:08.252940    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:08.253771    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0314 17:42:09.903628    6448 main.go:141] libmachine: [stdout =====>] : False
	
	I0314 17:42:09.903628    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:09.904208    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 17:42:11.342636    6448 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 17:42:11.342715    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:11.342787    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 17:42:14.908282    6448 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 17:42:14.908375    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:14.910418    6448 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 17:42:15.218134    6448 main.go:141] libmachine: Creating SSH key...
	I0314 17:42:15.307658    6448 main.go:141] libmachine: Creating VM...
	I0314 17:42:15.307658    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 17:42:17.917833    6448 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 17:42:17.917833    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:17.918078    6448 main.go:141] libmachine: Using switch "Default Switch"
	I0314 17:42:17.918179    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 17:42:19.563445    6448 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 17:42:19.563445    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:19.563445    6448 main.go:141] libmachine: Creating VHD
	I0314 17:42:19.564533    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0314 17:42:23.145368    6448 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 333BB2FA-4BFB-4F1B-BA2C-D555B96DE6A4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0314 17:42:23.145548    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:23.145548    6448 main.go:141] libmachine: Writing magic tar header
	I0314 17:42:23.145629    6448 main.go:141] libmachine: Writing SSH key tar header
	I0314 17:42:23.153302    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0314 17:42:26.198418    6448 main.go:141] libmachine: [stdout =====>] : 
	I0314 17:42:26.198980    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:26.198980    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\disk.vhd' -SizeBytes 20000MB
	I0314 17:42:28.733343    6448 main.go:141] libmachine: [stdout =====>] : 
	I0314 17:42:28.733343    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:28.734008    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-953400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0314 17:42:32.240860    6448 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-953400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0314 17:42:32.240860    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:32.240860    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-953400 -DynamicMemoryEnabled $false
	I0314 17:42:34.317691    6448 main.go:141] libmachine: [stdout =====>] : 
	I0314 17:42:34.317691    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:34.318437    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-953400 -Count 2
	I0314 17:42:36.338162    6448 main.go:141] libmachine: [stdout =====>] : 
	I0314 17:42:36.338162    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:36.338528    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-953400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\boot2docker.iso'
	I0314 17:42:38.733517    6448 main.go:141] libmachine: [stdout =====>] : 
	I0314 17:42:38.733517    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:38.733752    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-953400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\disk.vhd'
	I0314 17:42:41.167706    6448 main.go:141] libmachine: [stdout =====>] : 
	I0314 17:42:41.167706    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:41.167706    6448 main.go:141] libmachine: Starting VM...
	I0314 17:42:41.167706    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-953400
	I0314 17:42:44.129276    6448 main.go:141] libmachine: [stdout =====>] : 
	I0314 17:42:44.129326    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:44.129326    6448 main.go:141] libmachine: Waiting for host to start...
	I0314 17:42:44.129326    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:42:46.221105    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:42:46.221105    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:46.221105    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:42:48.521620    6448 main.go:141] libmachine: [stdout =====>] : 
	I0314 17:42:48.521620    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:49.536070    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:42:51.570352    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:42:51.570352    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:51.570551    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:42:53.880505    6448 main.go:141] libmachine: [stdout =====>] : 
	I0314 17:42:53.880505    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:54.889375    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:42:56.924250    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:42:56.924330    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:42:56.924330    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:42:59.251045    6448 main.go:141] libmachine: [stdout =====>] : 
	I0314 17:42:59.251193    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:00.262895    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:02.291200    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:02.292246    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:02.292292    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:43:04.629735    6448 main.go:141] libmachine: [stdout =====>] : 
	I0314 17:43:04.629807    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:05.636462    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:07.718780    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:07.719792    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:07.719837    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:43:10.136364    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:43:10.136364    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:10.136685    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:12.159169    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:12.159169    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:12.159435    6448 machine.go:94] provisionDockerMachine start ...
	I0314 17:43:12.159588    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:14.162504    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:14.162504    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:14.163472    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:43:16.533714    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:43:16.533714    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:16.538595    6448 main.go:141] libmachine: Using SSH client type: native
	I0314 17:43:16.547891    6448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.87.211 22 <nil> <nil>}
	I0314 17:43:16.547891    6448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 17:43:16.680087    6448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 17:43:16.680169    6448 buildroot.go:166] provisioning hostname "addons-953400"
	I0314 17:43:16.680169    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:18.678785    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:18.678785    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:18.678865    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:43:21.040293    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:43:21.040293    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:21.045652    6448 main.go:141] libmachine: Using SSH client type: native
	I0314 17:43:21.046360    6448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.87.211 22 <nil> <nil>}
	I0314 17:43:21.046360    6448 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-953400 && echo "addons-953400" | sudo tee /etc/hostname
	I0314 17:43:21.202052    6448 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-953400
	
	I0314 17:43:21.202209    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:23.204393    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:23.204393    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:23.204393    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:43:25.586261    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:43:25.586261    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:25.590294    6448 main.go:141] libmachine: Using SSH client type: native
	I0314 17:43:25.590690    6448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.87.211 22 <nil> <nil>}
	I0314 17:43:25.590690    6448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-953400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-953400/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-953400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 17:43:25.725233    6448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 17:43:25.725233    6448 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 17:43:25.725233    6448 buildroot.go:174] setting up certificates
	I0314 17:43:25.725233    6448 provision.go:84] configureAuth start
	I0314 17:43:25.725233    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:27.718647    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:27.718647    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:27.719774    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:43:30.087357    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:43:30.087357    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:30.087357    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:32.086302    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:32.087298    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:32.087499    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:43:34.502894    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:43:34.503851    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:34.503885    6448 provision.go:143] copyHostCerts
	I0314 17:43:34.503885    6448 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 17:43:34.503885    6448 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 17:43:34.503885    6448 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 17:43:34.503885    6448 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-953400 san=[127.0.0.1 172.17.87.211 addons-953400 localhost minikube]
	I0314 17:43:35.051813    6448 provision.go:177] copyRemoteCerts
	I0314 17:43:35.062117    6448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 17:43:35.062117    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:37.066783    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:37.066783    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:37.066783    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:43:39.464847    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:43:39.464906    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:39.464959    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:43:39.561583    6448 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4991273s)
	I0314 17:43:39.562061    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 17:43:39.610559    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 17:43:39.652385    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 17:43:39.695860    6448 provision.go:87] duration metric: took 13.96943s to configureAuth
	I0314 17:43:39.695860    6448 buildroot.go:189] setting minikube options for container-runtime
	I0314 17:43:39.695860    6448 config.go:182] Loaded profile config "addons-953400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 17:43:39.695860    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:41.689651    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:41.689651    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:41.689651    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:43:44.046410    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:43:44.046410    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:44.050258    6448 main.go:141] libmachine: Using SSH client type: native
	I0314 17:43:44.050788    6448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.87.211 22 <nil> <nil>}
	I0314 17:43:44.050788    6448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 17:43:44.182459    6448 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 17:43:44.182459    6448 buildroot.go:70] root file system type: tmpfs
	I0314 17:43:44.182459    6448 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 17:43:44.182987    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:46.164403    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:46.164403    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:46.164403    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:43:48.537136    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:43:48.537316    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:48.541545    6448 main.go:141] libmachine: Using SSH client type: native
	I0314 17:43:48.541930    6448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.87.211 22 <nil> <nil>}
	I0314 17:43:48.542022    6448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 17:43:48.684232    6448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
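Editor's note on the `%!s(MISSING)` token seen in the command above: it is not part of the generated unit file. The shell command template contains a literal `printf %s`, and when that template is passed through Go's fmt package without a matching operand, fmt stamps each unmatched verb as `%!s(MISSING)` instead of leaving `%s` intact. The same cosmetic logging artifact appears later in `date +%!s(MISSING).%!N(MISSING)` and in the kubeadm eviction thresholds (`"0%!"(MISSING)`). A minimal reproduction:

```go
package main

import "fmt"

func main() {
	// The command template contains a literal printf %s verb; formatting it
	// with no operands makes Go flag each unmatched verb as %!s(MISSING).
	cmd := fmt.Sprintf("sudo mkdir -p /lib/systemd/system && printf %s ...")
	fmt.Println(cmd)
	// Prints: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) ...
}
```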
	I0314 17:43:48.684232    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:50.640211    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:50.640936    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:50.640936    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:43:53.044810    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:43:53.045459    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:53.049321    6448 main.go:141] libmachine: Using SSH client type: native
	I0314 17:43:53.049686    6448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.87.211 22 <nil> <nil>}
	I0314 17:43:53.049686    6448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 17:43:55.170518    6448 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
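The `diff ... || { mv ...; systemctl ... }` command above installs `docker.service.new` only when it differs from the unit already on disk. On this fresh VM `/lib/systemd/system/docker.service` does not exist yet, so `diff` fails and the replace/daemon-reload/enable/restart branch runs, which is why the output includes the "Created symlink" line. A minimal sketch of that install-if-changed idea (illustrative helper names, not minikube's code):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged writes the rendered unit only when it differs from what
// is on disk, and reports whether a daemon-reload/restart is needed.
func installIfChanged(path string, rendered []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // unchanged: skip the restart
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	fmt.Println(changed, err)
}
```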
	I0314 17:43:55.170518    6448 machine.go:97] duration metric: took 43.0078507s to provisionDockerMachine
	I0314 17:43:55.170518    6448 client.go:171] duration metric: took 1m49.2977775s to LocalClient.Create
	I0314 17:43:55.170518    6448 start.go:167] duration metric: took 1m49.2977775s to libmachine.API.Create "addons-953400"
	I0314 17:43:55.170518    6448 start.go:293] postStartSetup for "addons-953400" (driver="hyperv")
	I0314 17:43:55.170518    6448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 17:43:55.180686    6448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 17:43:55.180686    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:43:57.223267    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:43:57.223314    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:57.223368    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:43:59.617331    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:43:59.617331    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:43:59.618002    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:43:59.727950    6448 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5469228s)
	I0314 17:43:59.735928    6448 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 17:43:59.742882    6448 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 17:43:59.742882    6448 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 17:43:59.743859    6448 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 17:43:59.744215    6448 start.go:296] duration metric: took 4.5732829s for postStartSetup
	I0314 17:43:59.746779    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:44:01.720016    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:44:01.720016    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:44:01.720016    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:44:04.114487    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:44:04.114744    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:44:04.114744    6448 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\config.json ...
	I0314 17:44:04.117473    6448 start.go:128] duration metric: took 1m58.2466031s to createHost
	I0314 17:44:04.117669    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:44:06.092232    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:44:06.093009    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:44:06.093086    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:44:08.473391    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:44:08.473641    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:44:08.477409    6448 main.go:141] libmachine: Using SSH client type: native
	I0314 17:44:08.477961    6448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.87.211 22 <nil> <nil>}
	I0314 17:44:08.478036    6448 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 17:44:08.615614    6448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710438248.868691717
	
	I0314 17:44:08.615614    6448 fix.go:216] guest clock: 1710438248.868691717
	I0314 17:44:08.615614    6448 fix.go:229] Guest: 2024-03-14 17:44:08.868691717 +0000 UTC Remote: 2024-03-14 17:44:04.1175549 +0000 UTC m=+123.612006901 (delta=4.751136817s)
	I0314 17:44:08.615614    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:44:10.647805    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:44:10.647805    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:44:10.648410    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:44:13.033719    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:44:13.033719    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:44:13.038070    6448 main.go:141] libmachine: Using SSH client type: native
	I0314 17:44:13.038282    6448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.87.211 22 <nil> <nil>}
	I0314 17:44:13.038282    6448 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710438248
	I0314 17:44:13.173682    6448 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 17:44:08 UTC 2024
	
	I0314 17:44:13.173682    6448 fix.go:236] clock set: Thu Mar 14 17:44:08 UTC 2024
	 (err=<nil>)
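The clock-sync step above reads the guest clock over SSH with `date +%s.%N`, compares it against the host-side timestamp, and resets it with `sudo date -s @<seconds>` when the drift is too large; here the guest ran about 4.75s ahead. A minimal sketch of the comparison (function and variable names are illustrative; it assumes the 9-digit fractional part that `%N` prints):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` from the guest and
// returns how far the guest clock is ahead of the given host time.
func guestClockDelta(out string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// %N prints exactly nine digits, so this parses as nanoseconds.
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	host := time.Date(2024, 3, 14, 17, 44, 4, 117554900, time.UTC)
	d, _ := guestClockDelta("1710438248.868691717", host)
	fmt.Printf("delta=%s\n", d) // ~4.751136817s, matching the log above
	// Past a threshold, the guest clock is reset with: sudo date -s @1710438248
}
```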
	I0314 17:44:13.173744    6448 start.go:83] releasing machines lock for "addons-953400", held for 2m7.3024112s
	I0314 17:44:13.173912    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:44:15.161521    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:44:15.161521    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:44:15.161521    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:44:17.564019    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:44:17.564019    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:44:17.566746    6448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 17:44:17.567362    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:44:17.575016    6448 ssh_runner.go:195] Run: cat /version.json
	I0314 17:44:17.575016    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:44:19.564567    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:44:19.564756    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:44:19.564756    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:44:19.566407    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:44:19.566447    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:44:19.566447    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:44:21.961629    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:44:21.961629    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:44:21.962214    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:44:21.985269    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:44:21.985269    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:44:21.985269    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:44:22.053596    6448 ssh_runner.go:235] Completed: cat /version.json: (4.477726s)
	I0314 17:44:22.063602    6448 ssh_runner.go:195] Run: systemctl --version
	I0314 17:44:22.184213    6448 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6165994s)
	I0314 17:44:22.196290    6448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 17:44:22.204665    6448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 17:44:22.213731    6448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 17:44:22.242909    6448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 17:44:22.242909    6448 start.go:494] detecting cgroup driver to use...
	I0314 17:44:22.243537    6448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 17:44:22.284805    6448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 17:44:22.310905    6448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 17:44:22.328595    6448 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 17:44:22.339133    6448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 17:44:22.366517    6448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 17:44:22.396871    6448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 17:44:22.426115    6448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 17:44:22.453292    6448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 17:44:22.481627    6448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 17:44:22.508946    6448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 17:44:22.535531    6448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 17:44:22.560219    6448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 17:44:22.756532    6448 ssh_runner.go:195] Run: sudo systemctl restart containerd
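The `sed` series above rewrites containerd's `config.toml`: the sandbox image is pinned to `registry.k8s.io/pause:3.9`, `SystemdCgroup` is forced off to match the "cgroupfs" driver choice, the legacy v1 runtime names are mapped to the runc v2 shim, and the CNI conf dir is set to `/etc/cni/net.d`. One of those edits expressed as an in-process regex replace rather than `sed` (a sketch, not how minikube actually does it):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := []byte("[plugins]\n  SystemdCgroup = true\n")
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(string(re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))))
}
```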
	I0314 17:44:22.793673    6448 start.go:494] detecting cgroup driver to use...
	I0314 17:44:22.805769    6448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 17:44:22.837372    6448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 17:44:22.869080    6448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 17:44:22.903305    6448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 17:44:22.935951    6448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 17:44:22.966398    6448 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 17:44:23.029585    6448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 17:44:23.052148    6448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 17:44:23.092784    6448 ssh_runner.go:195] Run: which cri-dockerd
	I0314 17:44:23.107093    6448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 17:44:23.124173    6448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 17:44:23.162158    6448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 17:44:23.347490    6448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 17:44:23.517455    6448 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 17:44:23.517455    6448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
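The log records only that a 130-byte `/etc/docker/daemon.json` was copied over to pin Docker's cgroup driver; the payload itself is not shown. Since dockerd accepts the driver via its `exec-opts` setting, a plausible reconstruction of the file (hypothetical content, not taken from the log) would be:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed shape of the daemon.json minikube writes for the cgroupfs
	// driver; only the exec-opts key is relevant to the log line above.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
```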
	I0314 17:44:23.557384    6448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 17:44:23.742835    6448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 17:44:26.256081    6448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.513058s)
	I0314 17:44:26.264576    6448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 17:44:26.296053    6448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 17:44:26.325446    6448 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 17:44:26.514390    6448 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 17:44:26.700967    6448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 17:44:26.890873    6448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 17:44:26.929757    6448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 17:44:26.963528    6448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 17:44:27.147004    6448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 17:44:27.260165    6448 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 17:44:27.268278    6448 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 17:44:27.277919    6448 start.go:562] Will wait 60s for crictl version
	I0314 17:44:27.286474    6448 ssh_runner.go:195] Run: which crictl
	I0314 17:44:27.300892    6448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 17:44:27.378186    6448 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 17:44:27.387129    6448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 17:44:27.428719    6448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 17:44:27.472641    6448 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 17:44:27.473588    6448 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 17:44:27.479608    6448 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 17:44:27.479608    6448 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 17:44:27.479608    6448 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 17:44:27.479608    6448 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 17:44:27.481994    6448 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 17:44:27.481994    6448 ip.go:210] interface addr: 172.17.80.1/20
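The `ip.go` lines above walk the host's network interfaces for one whose name matches the Hyper-V default-switch prefix and take its IPv4 address (172.17.80.1), which is then written into the guest's `/etc/hosts` as `host.minikube.internal`. A standalone sketch of that scan using Go's net package (the prefix constant is taken from the log; the rest is illustrative):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	const prefix = "vEthernet (Default Switch)"
	ifaces, _ := net.Interfaces()
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1"
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			// Keep only the IPv4 address; the fe80:: link-local is skipped.
			if ipn, ok := a.(*net.IPNet); ok && ipn.IP.To4() != nil {
				fmt.Println(ipn.IP) // e.g. 172.17.80.1
			}
		}
	}
}
```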
	I0314 17:44:27.490695    6448 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 17:44:27.497666    6448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 17:44:27.518741    6448 kubeadm.go:877] updating cluster {Name:addons-953400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-953400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.87.211 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 17:44:27.518741    6448 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 17:44:27.525907    6448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 17:44:27.550029    6448 docker.go:685] Got preloaded images: 
	I0314 17:44:27.550029    6448 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0314 17:44:27.559396    6448 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 17:44:27.589018    6448 ssh_runner.go:195] Run: which lz4
	I0314 17:44:27.604555    6448 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 17:44:27.611894    6448 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 17:44:27.611894    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0314 17:44:29.204302    6448 docker.go:649] duration metric: took 1.6089564s to copy over tarball
	I0314 17:44:29.213489    6448 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 17:44:36.177660    6448 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.9635551s)
	I0314 17:44:36.177714    6448 ssh_runner.go:146] rm: /preloaded.tar.lz4
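The preload path above avoids pulling images one at a time: a `stat` confirms no tarball is present yet, the 423 MB preloaded-images tarball is copied over SSH, unpacked into `/var` with lz4 (the `--xattrs-include security.capability` flag preserves file capabilities on the extracted binaries), and then deleted. The same extraction, driven from Go (flags copied verbatim from the log line above; the wrapper itself is illustrative):

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same invocation as in the log; requires lz4 to be present on the guest.
	cmd := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
```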
	I0314 17:44:36.257812    6448 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 17:44:37.501637    6448 ssh_runner.go:235] Completed: sudo cat /var/lib/docker/image/overlay2/repositories.json: (1.2437315s)
	I0314 17:44:37.501881    6448 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0314 17:44:37.544999    6448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 17:44:37.727607    6448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 17:44:42.751710    6448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.0237277s)
	I0314 17:44:42.759261    6448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 17:44:42.786511    6448 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 17:44:42.786614    6448 cache_images.go:84] Images are preloaded, skipping loading
	I0314 17:44:42.786614    6448 kubeadm.go:928] updating node { 172.17.87.211 8443 v1.28.4 docker true true} ...
	I0314 17:44:42.786849    6448 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-953400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.87.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-953400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 17:44:42.793365    6448 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 17:44:42.826580    6448 cni.go:84] Creating CNI manager for ""
	I0314 17:44:42.826580    6448 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 17:44:42.826580    6448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 17:44:42.826580    6448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.87.211 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-953400 NodeName:addons-953400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.87.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.87.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 17:44:42.827119    6448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.87.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-953400"
	  kubeletExtraArgs:
	    node-ip: 172.17.87.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.87.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 17:44:42.839252    6448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 17:44:42.855754    6448 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 17:44:42.866422    6448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 17:44:42.883635    6448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0314 17:44:42.917265    6448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 17:44:42.950315    6448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0314 17:44:42.990450    6448 ssh_runner.go:195] Run: grep 172.17.87.211	control-plane.minikube.internal$ /etc/hosts
	I0314 17:44:42.999438    6448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.87.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
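This is the same idempotent `/etc/hosts` rewrite used earlier for `host.minikube.internal`: drop any line already ending in the tab-separated hostname, append the fresh `ip<TAB>name` mapping, and copy the temp file back into place. An in-process equivalent of that shell one-liner (an illustrative helper, not minikube's code):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mirrors the shell pipeline above: remove any existing
// line mapping the name, then append "ip<TAB>name".
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry: replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.17.80.1\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "172.17.87.211", "control-plane.minikube.internal"))
}
```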
	I0314 17:44:43.030879    6448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 17:44:43.215938    6448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 17:44:43.242957    6448 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400 for IP: 172.17.87.211
	I0314 17:44:43.242990    6448 certs.go:194] generating shared ca certs ...
	I0314 17:44:43.243041    6448 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:44:43.243592    6448 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 17:44:43.352102    6448 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt ...
	I0314 17:44:43.352102    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt: {Name:mkfaab427ca81a644dd8158f14f3f807f65e8ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:44:43.352698    6448 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key ...
	I0314 17:44:43.353698    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key: {Name:mke77f92a4900f4ba92d06a20a85ddb2e967d43b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:44:43.353898    6448 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 17:44:43.429897    6448 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0314 17:44:43.430897    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk06242bb3e648e29b1f160fecc7578d1c3ccbe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:44:43.431337    6448 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key ...
	I0314 17:44:43.431337    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk9dbfc690f0c353aa1a789ba901364f0646dd1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:44:43.432455    6448 certs.go:256] generating profile certs ...
	I0314 17:44:43.433723    6448 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.key
	I0314 17:44:43.433723    6448 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt with IP's: []
	I0314 17:44:43.508340    6448 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt ...
	I0314 17:44:43.508340    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: {Name:mk17f99396e4fb29d5e95ad2d6bb0735fae1f922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:44:43.509868    6448 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.key ...
	I0314 17:44:43.509868    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.key: {Name:mkb5c79291bfb91d36587e3637d90797984458c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:44:43.510192    6448 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\apiserver.key.e31c22e9
	I0314 17:44:43.510192    6448 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\apiserver.crt.e31c22e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.87.211]
	I0314 17:44:43.686371    6448 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\apiserver.crt.e31c22e9 ...
	I0314 17:44:43.686371    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\apiserver.crt.e31c22e9: {Name:mkdd4a3384b20e6307460988f51b25f4cab48fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:44:43.687463    6448 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\apiserver.key.e31c22e9 ...
	I0314 17:44:43.687463    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\apiserver.key.e31c22e9: {Name:mkaa517942ada3c9d1e3a43ab48dd18ed0bbe6ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:44:43.688425    6448 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\apiserver.crt.e31c22e9 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\apiserver.crt
	I0314 17:44:43.698648    6448 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\apiserver.key.e31c22e9 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\apiserver.key
	I0314 17:44:43.700678    6448 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\proxy-client.key
	I0314 17:44:43.700880    6448 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\proxy-client.crt with IP's: []
	I0314 17:44:44.101020    6448 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\proxy-client.crt ...
	I0314 17:44:44.101020    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\proxy-client.crt: {Name:mka2916d8c56fec6ce145123058215fac925e88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:44:44.103182    6448 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\proxy-client.key ...
	I0314 17:44:44.103182    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\proxy-client.key: {Name:mk7c8e843b840dc257f702a51ad4918757e339c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
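The cert sequence above builds a shared CA (`minikubeCA`), then profile certs signed by it: a client cert for `minikube-user`, an apiserver cert with the IP SANs `[10.96.0.1 127.0.0.1 10.0.0.1 172.17.87.211]`, and an aggregator proxy-client cert. A minimal sketch of that CA-plus-signed-server-cert flow with Go's standard crypto/x509 package, using the SANs from the log (this illustrates the standard library pattern, not minikube's crypto.go verbatim):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA: template signs itself.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert signed by the CA, with the IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("172.17.87.211"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
```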
	I0314 17:44:44.114833    6448 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 17:44:44.114833    6448 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 17:44:44.114833    6448 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 17:44:44.114833    6448 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 17:44:44.116903    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 17:44:44.159369    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 17:44:44.200829    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 17:44:44.241402    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 17:44:44.284673    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0314 17:44:44.325236    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 17:44:44.368750    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 17:44:44.409574    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 17:44:44.454678    6448 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 17:44:44.495262    6448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 17:44:44.534917    6448 ssh_runner.go:195] Run: openssl version
	I0314 17:44:44.558045    6448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 17:44:44.586067    6448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 17:44:44.592605    6448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 17:44:44.601845    6448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 17:44:44.619662    6448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 17:44:44.649087    6448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 17:44:44.655479    6448 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 17:44:44.655769    6448 kubeadm.go:391] StartCluster: {Name:addons-953400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-953400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.87.211 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 17:44:44.662009    6448 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 17:44:44.700625    6448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 17:44:44.727977    6448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 17:44:44.752663    6448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 17:44:44.769071    6448 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 17:44:44.769071    6448 kubeadm.go:156] found existing configuration files:
	
	I0314 17:44:44.778483    6448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 17:44:44.792471    6448 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 17:44:44.801515    6448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 17:44:44.832866    6448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 17:44:44.849603    6448 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 17:44:44.859626    6448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 17:44:44.884165    6448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 17:44:44.901919    6448 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 17:44:44.911152    6448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 17:44:44.937896    6448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 17:44:44.954793    6448 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 17:44:44.963776    6448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 17:44:44.980643    6448 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 17:44:45.241388    6448 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 17:44:58.144435    6448 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 17:44:58.144435    6448 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 17:44:58.144435    6448 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 17:44:58.145012    6448 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 17:44:58.145012    6448 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 17:44:58.145012    6448 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 17:44:58.148014    6448 out.go:204]   - Generating certificates and keys ...
	I0314 17:44:58.148014    6448 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 17:44:58.148538    6448 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 17:44:58.148680    6448 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 17:44:58.148680    6448 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 17:44:58.148680    6448 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 17:44:58.149206    6448 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 17:44:58.149458    6448 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 17:44:58.149458    6448 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-953400 localhost] and IPs [172.17.87.211 127.0.0.1 ::1]
	I0314 17:44:58.149458    6448 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 17:44:58.150113    6448 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-953400 localhost] and IPs [172.17.87.211 127.0.0.1 ::1]
	I0314 17:44:58.150240    6448 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 17:44:58.150301    6448 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 17:44:58.150301    6448 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 17:44:58.150301    6448 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 17:44:58.150301    6448 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 17:44:58.150908    6448 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 17:44:58.150908    6448 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 17:44:58.150908    6448 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 17:44:58.150908    6448 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 17:44:58.150908    6448 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 17:44:58.153471    6448 out.go:204]   - Booting up control plane ...
	I0314 17:44:58.153471    6448 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 17:44:58.153471    6448 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 17:44:58.154087    6448 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 17:44:58.154175    6448 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 17:44:58.154175    6448 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 17:44:58.154175    6448 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 17:44:58.154907    6448 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 17:44:58.155054    6448 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.503149 seconds
	I0314 17:44:58.155452    6448 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 17:44:58.155804    6448 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 17:44:58.155943    6448 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 17:44:58.155943    6448 kubeadm.go:309] [mark-control-plane] Marking the node addons-953400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 17:44:58.156475    6448 kubeadm.go:309] [bootstrap-token] Using token: 3fnfw4.59el6kxmkuoepszs
	I0314 17:44:58.163202    6448 out.go:204]   - Configuring RBAC rules ...
	I0314 17:44:58.163257    6448 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 17:44:58.163257    6448 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 17:44:58.163947    6448 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 17:44:58.163998    6448 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 17:44:58.163998    6448 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 17:44:58.164562    6448 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 17:44:58.164562    6448 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 17:44:58.164562    6448 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 17:44:58.164562    6448 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 17:44:58.164562    6448 kubeadm.go:309] 
	I0314 17:44:58.165180    6448 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 17:44:58.165233    6448 kubeadm.go:309] 
	I0314 17:44:58.165233    6448 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 17:44:58.165233    6448 kubeadm.go:309] 
	I0314 17:44:58.165233    6448 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 17:44:58.165233    6448 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 17:44:58.165233    6448 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 17:44:58.165233    6448 kubeadm.go:309] 
	I0314 17:44:58.165841    6448 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 17:44:58.165841    6448 kubeadm.go:309] 
	I0314 17:44:58.165841    6448 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 17:44:58.165841    6448 kubeadm.go:309] 
	I0314 17:44:58.165841    6448 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 17:44:58.165841    6448 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 17:44:58.166400    6448 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 17:44:58.166400    6448 kubeadm.go:309] 
	I0314 17:44:58.166400    6448 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 17:44:58.166400    6448 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 17:44:58.166400    6448 kubeadm.go:309] 
	I0314 17:44:58.167002    6448 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3fnfw4.59el6kxmkuoepszs \
	I0314 17:44:58.167170    6448 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb \
	I0314 17:44:58.167170    6448 kubeadm.go:309] 	--control-plane 
	I0314 17:44:58.167170    6448 kubeadm.go:309] 
	I0314 17:44:58.167170    6448 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 17:44:58.167170    6448 kubeadm.go:309] 
	I0314 17:44:58.167735    6448 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3fnfw4.59el6kxmkuoepszs \
	I0314 17:44:58.167769    6448 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb 
	I0314 17:44:58.167769    6448 cni.go:84] Creating CNI manager for ""
	I0314 17:44:58.167769    6448 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 17:44:58.170565    6448 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 17:44:58.181176    6448 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 17:44:58.213278    6448 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
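Note: the 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI config. A sketch of the same step with a generic bridge + host-local conflist (the JSON below is illustrative, not minikube's exact payload):

    // write_conflist.go - materialize a bridge CNI config, as the scp step above does.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    `

    func main() {
        // values above (subnet, bridge name) are generic examples
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }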
	I0314 17:44:58.269284    6448 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 17:44:58.281105    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:44:58.281105    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-953400 minikube.k8s.io/updated_at=2024_03_14T17_44_58_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=addons-953400 minikube.k8s.io/primary=true
	I0314 17:44:58.317633    6448 ops.go:34] apiserver oom_adj: -16
	I0314 17:44:58.550224    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:44:59.058342    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:44:59.553230    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:00.059589    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:00.562615    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:01.062770    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:01.562546    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:02.049718    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:02.552142    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:03.052739    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:03.559294    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:04.048236    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:04.553785    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:05.060052    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:05.560413    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:06.062926    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:06.548925    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:07.052163    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:07.553816    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:08.062068    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:08.563288    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:09.051818    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:09.554048    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:10.055248    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:10.559311    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:11.050315    6448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:45:11.203454    6448 kubeadm.go:1106] duration metric: took 12.9331215s to wait for elevateKubeSystemPrivileges
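Note: the repeated "kubectl get sa default" runs above are a readiness poll; elevateKubeSystemPrivileges waits for the default ServiceAccount to exist before binding cluster-admin. A minimal Go sketch of that retry pattern (interval, timeout, and the local kubectl invocation are illustrative):

    // poll.go - retry a check on an interval until it succeeds or the deadline passes.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func waitFor(ctx context.Context, interval time.Duration, check func() error) error {
        t := time.NewTicker(interval)
        defer t.Stop()
        for {
            if err := check(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-t.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
        defer cancel()
        err := waitFor(ctx, 500*time.Millisecond, func() error {
            // the same check the log shows, run locally for illustration
            return exec.Command("kubectl", "get", "sa", "default").Run()
        })
        fmt.Println("default service account ready:", err == nil)
    }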
	W0314 17:45:11.203454    6448 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 17:45:11.203454    6448 kubeadm.go:393] duration metric: took 26.5457058s to StartCluster
	I0314 17:45:11.203454    6448 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:45:11.203454    6448 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 17:45:11.204415    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:45:11.205941    6448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 17:45:11.205941    6448 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.87.211 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 17:45:11.209410    6448 out.go:177] * Verifying Kubernetes components...
	I0314 17:45:11.205941    6448 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0314 17:45:11.206815    6448 config.go:182] Loaded profile config "addons-953400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 17:45:11.213471    6448 addons.go:69] Setting helm-tiller=true in profile "addons-953400"
	I0314 17:45:11.213520    6448 addons.go:69] Setting ingress=true in profile "addons-953400"
	I0314 17:45:11.213520    6448 addons.go:69] Setting metrics-server=true in profile "addons-953400"
	I0314 17:45:11.213520    6448 addons.go:69] Setting yakd=true in profile "addons-953400"
	I0314 17:45:11.213575    6448 addons.go:234] Setting addon metrics-server=true in "addons-953400"
	I0314 17:45:11.213575    6448 addons.go:69] Setting storage-provisioner=true in profile "addons-953400"
	I0314 17:45:11.213575    6448 addons.go:234] Setting addon ingress=true in "addons-953400"
	I0314 17:45:11.213575    6448 addons.go:69] Setting gcp-auth=true in profile "addons-953400"
	I0314 17:45:11.213631    6448 addons.go:234] Setting addon storage-provisioner=true in "addons-953400"
	I0314 17:45:11.213657    6448 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-953400"
	I0314 17:45:11.213738    6448 addons.go:69] Setting volumesnapshots=true in profile "addons-953400"
	I0314 17:45:11.213738    6448 addons.go:234] Setting addon volumesnapshots=true in "addons-953400"
	I0314 17:45:11.213657    6448 mustload.go:65] Loading cluster: addons-953400
	I0314 17:45:11.213575    6448 addons.go:234] Setting addon yakd=true in "addons-953400"
	I0314 17:45:11.213877    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:11.213877    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:11.213520    6448 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-953400"
	I0314 17:45:11.213967    6448 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-953400"
	I0314 17:45:11.213575    6448 addons.go:69] Setting default-storageclass=true in profile "addons-953400"
	I0314 17:45:11.214105    6448 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-953400"
	I0314 17:45:11.214159    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:11.214159    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:11.214362    6448 config.go:182] Loaded profile config "addons-953400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 17:45:11.213738    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:11.213631    6448 addons.go:69] Setting ingress-dns=true in profile "addons-953400"
	I0314 17:45:11.215119    6448 addons.go:234] Setting addon ingress-dns=true in "addons-953400"
	I0314 17:45:11.213471    6448 addons.go:69] Setting cloud-spanner=true in profile "addons-953400"
	I0314 17:45:11.213575    6448 addons.go:234] Setting addon helm-tiller=true in "addons-953400"
	I0314 17:45:11.215357    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:11.215357    6448 addons.go:234] Setting addon cloud-spanner=true in "addons-953400"
	I0314 17:45:11.213575    6448 addons.go:69] Setting registry=true in profile "addons-953400"
	I0314 17:45:11.215667    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:11.215667    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:11.215764    6448 addons.go:234] Setting addon registry=true in "addons-953400"
	I0314 17:45:11.216364    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:11.213471    6448 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-953400"
	I0314 17:45:11.213738    6448 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-953400"
	I0314 17:45:11.213877    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:11.218357    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.213471    6448 addons.go:69] Setting inspektor-gadget=true in profile "addons-953400"
	I0314 17:45:11.218672    6448 addons.go:234] Setting addon inspektor-gadget=true in "addons-953400"
	I0314 17:45:11.218782    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.218907    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:11.216364    6448 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-953400"
	I0314 17:45:11.219103    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:11.221095    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.221796    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.222064    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.222160    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.222427    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.222427    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.222972    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.224208    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.224896    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.224976    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.225806    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.225806    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:11.226822    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
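Note: each libmachine line above shells out to PowerShell to read the Hyper-V VM state. A minimal Go sketch of that invocation (VM name taken from the log; error handling simplified):

    // vmstate.go - query a Hyper-V VM's state via PowerShell, as libmachine does above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func vmState(name string) (string, error) {
        ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
        out, err := exec.Command(ps, "-NoProfile", "-NonInteractive",
            fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name)).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := vmState("addons-953400")
        if err != nil {
            panic(err)
        }
        fmt.Println(state) // e.g. "Running", as in the [stdout =====>] lines above
    }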
	I0314 17:45:11.232815    6448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 17:45:11.920992    6448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 17:45:12.173446    6448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 17:45:15.984824    6448 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.8110945s)
	I0314 17:45:15.990825    6448 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.0695305s)
	I0314 17:45:15.990825    6448 start.go:948] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
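Note: the sed pipeline completed above inserts a hosts{} block into the CoreDNS Corefile just before the forward directive, mapping host.minikube.internal to the host IP. A sketch of the same transform in Go (sample Corefile is illustrative; the sed version also inserts a log directive, omitted here):

    // inject.go - insert a host.minikube.internal record ahead of CoreDNS's forward plugin.
    package main

    import (
        "fmt"
        "strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf(
            "        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
            hostIP)
        var b strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(hostsBlock) // same effect as the sed '/forward/i' above
            }
            b.WriteString(line)
        }
        return b.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
        fmt.Print(injectHostRecord(corefile, "172.17.80.1"))
    }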
	I0314 17:45:15.996828    6448 node_ready.go:35] waiting up to 6m0s for node "addons-953400" to be "Ready" ...
	I0314 17:45:16.079828    6448 node_ready.go:49] node "addons-953400" has status "Ready":"True"
	I0314 17:45:16.079828    6448 node_ready.go:38] duration metric: took 82.9942ms for node "addons-953400" to be "Ready" ...
	I0314 17:45:16.079828    6448 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 17:45:16.288705    6448 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-68dzl" in "kube-system" namespace to be "Ready" ...
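Note: the pod_ready waits above test each pod's PodReady condition. A minimal sketch of that check, assuming client-go is available (kubeconfig path and pod name copied from the log for illustration):

    // ready.go - fetch a pod and test its PodReady condition, as pod_ready.go does above.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-68dzl", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", podReady(pod))
    }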
	I0314 17:45:16.576057    6448 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-953400" context rescaled to 1 replicas
	I0314 17:45:17.035191    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.035191    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.039097    6448 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0314 17:45:17.041100    6448 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0314 17:45:17.041100    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0314 17:45:17.042109    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.221114    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.222112    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.225123    6448 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0314 17:45:17.238125    6448 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0314 17:45:17.238125    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0314 17:45:17.238125    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.229123    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.245163    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.247145    6448 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0314 17:45:17.251141    6448 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0314 17:45:17.251141    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0314 17:45:17.251141    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.324407    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.324811    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.325384    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.325384    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.325384    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.325384    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.328458    6448 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 17:45:17.326624    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.327674    6448 addons.go:234] Setting addon default-storageclass=true in "addons-953400"
	I0314 17:45:17.331585    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:17.331585    6448 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-953400"
	I0314 17:45:17.331585    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:17.332244    6448 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 17:45:17.332244    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 17:45:17.332244    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.332244    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.336111    6448 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0314 17:45:17.332927    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.332927    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.339727    6448 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0314 17:45:17.339727    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0314 17:45:17.339727    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.339727    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.340099    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.342380    6448 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0314 17:45:17.346667    6448 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0314 17:45:17.346667    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0314 17:45:17.345043    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.346667    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.346667    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.364159    6448 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0314 17:45:17.348626    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.348626    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.355114    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.355493    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.355611    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.371172    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.376171    6448 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0314 17:45:17.371172    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.371172    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.371172    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.371172    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.372187    6448 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0314 17:45:17.383169    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0314 17:45:17.383169    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.391355    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:17.391355    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:17.397332    6448 out.go:177]   - Using image docker.io/registry:2.8.3
	I0314 17:45:17.401277    6448 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0314 17:45:17.397332    6448 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 17:45:17.397332    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:17.406201    6448 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0314 17:45:17.413929    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 17:45:17.413929    6448 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0314 17:45:17.418563    6448 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0314 17:45:17.419095    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0314 17:45:17.419095    6448 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0314 17:45:17.419095    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.431610    6448 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0314 17:45:17.425976    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.431610    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0314 17:45:17.431610    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.425976    6448 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 17:45:17.439600    6448 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 17:45:17.444606    6448 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0314 17:45:17.449573    6448 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0314 17:45:17.447304    6448 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0314 17:45:17.453577    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0314 17:45:17.453577    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:17.459572    6448 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0314 17:45:17.463353    6448 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0314 17:45:17.478414    6448 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0314 17:45:17.491426    6448 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0314 17:45:17.516426    6448 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0314 17:45:17.531459    6448 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0314 17:45:17.531459    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0314 17:45:17.531459    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:18.902809    6448 pod_ready.go:102] pod "coredns-5dd5756b68-68dzl" in "kube-system" namespace has status "Ready":"False"
	I0314 17:45:21.239775    6448 pod_ready.go:102] pod "coredns-5dd5756b68-68dzl" in "kube-system" namespace has status "Ready":"False"
	I0314 17:45:22.458349    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:22.458349    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:22.458349    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:22.507858    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:22.508872    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:22.508872    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:22.539728    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:22.539728    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:22.539973    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:22.613075    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:22.613075    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:22.613075    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:22.728172    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:22.728172    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:22.728172    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:22.752513    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:22.752513    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:22.752513    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:22.788572    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:22.788572    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:22.788572    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:22.834341    6448 pod_ready.go:92] pod "coredns-5dd5756b68-68dzl" in "kube-system" namespace has status "Ready":"True"
	I0314 17:45:22.834341    6448 pod_ready.go:81] duration metric: took 6.5451486s for pod "coredns-5dd5756b68-68dzl" in "kube-system" namespace to be "Ready" ...
	I0314 17:45:22.834341    6448 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bbqcf" in "kube-system" namespace to be "Ready" ...
	I0314 17:45:22.881769    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:22.881769    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:22.882886    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:22.882886    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:22.882886    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:22.882886    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:22.910398    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:22.910398    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:22.910398    6448 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 17:45:22.910398    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 17:45:22.910398    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:22.924722    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:22.924831    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:22.929531    6448 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0314 17:45:22.933256    6448 out.go:177]   - Using image docker.io/busybox:stable
	I0314 17:45:22.942689    6448 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0314 17:45:22.942689    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0314 17:45:22.942689    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:23.187576    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:23.187576    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:23.187576    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:23.274928    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:23.274928    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:23.274928    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:23.944420    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:23.944420    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:23.945407    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:24.903887    6448 pod_ready.go:102] pod "coredns-5dd5756b68-bbqcf" in "kube-system" namespace has status "Ready":"False"
	I0314 17:45:24.911416    6448 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0314 17:45:24.911416    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:27.372603    6448 pod_ready.go:102] pod "coredns-5dd5756b68-bbqcf" in "kube-system" namespace has status "Ready":"False"
	I0314 17:45:28.436534    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:28.436608    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:28.436687    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:28.708258    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:28.708258    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:28.709305    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:28.897206    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:28.897206    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:28.898204    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
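Note: each "new ssh client" line above dials the VM with the machine's private key. A minimal sketch of that step, assuming golang.org/x/crypto/ssh (IP, port, user, and key path copied from the log):

    // dial.go - open an SSH connection to the minikube VM, as sshutil does above.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "172.17.87.211:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("connected")
    }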
	I0314 17:45:28.954142    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:28.954142    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:28.955146    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:29.018486    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:29.018486    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:29.019479    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:29.165712    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:29.165791    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:29.166342    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:29.233990    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:29.234065    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:29.234147    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:29.344513    6448 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 17:45:29.344513    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0314 17:45:29.348517    6448 pod_ready.go:97] pod "coredns-5dd5756b68-bbqcf" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 17:45:11 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 17:45:11 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 17:45:11 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 17:45:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.87.211 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-03-14 17:45:11 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-14 17:45:18 +0000 UTC,FinishedAt:2024-03-14 17:45:28 +0000 UTC,ContainerID:docker://e042f1c391363141424cc7c1c64aa3f12cfe0b6cd3c66015ed4eaed789da5003,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://e042f1c391363141424cc7c1c64aa3f12cfe0b6cd3c66015ed4eaed789da5003 Started:0xc002a15e40 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0314 17:45:29.348517    6448 pod_ready.go:81] duration metric: took 6.513692s for pod "coredns-5dd5756b68-bbqcf" in "kube-system" namespace to be "Ready" ...
	E0314 17:45:29.348517    6448 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-bbqcf" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 17:45:11 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 17:45:11 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 17:45:11 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 17:45:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.87.211 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-03-14 17:45:11 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-14 17:45:18 +0000 UTC,FinishedAt:2024-03-14 17:45:28 +0000 UTC,ContainerID:docker://e042f1c391363141424cc7c1c64aa3f12cfe0b6cd3c66015ed4eaed789da5003,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://e042f1c391363141424cc7c1c64aa3f12cfe0b6cd3c66015ed4eaed789da5003 Started:0xc002a15e40 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0314 17:45:29.348517    6448 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-953400" in "kube-system" namespace to be "Ready" ...
	I0314 17:45:29.356529    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0314 17:45:29.359525    6448 pod_ready.go:92] pod "etcd-addons-953400" in "kube-system" namespace has status "Ready":"True"
	I0314 17:45:29.359525    6448 pod_ready.go:81] duration metric: took 11.0072ms for pod "etcd-addons-953400" in "kube-system" namespace to be "Ready" ...
	I0314 17:45:29.359525    6448 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-953400" in "kube-system" namespace to be "Ready" ...
	I0314 17:45:29.373555    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:29.373555    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:29.373555    6448 pod_ready.go:92] pod "kube-apiserver-addons-953400" in "kube-system" namespace has status "Ready":"True"
	I0314 17:45:29.373555    6448 pod_ready.go:81] duration metric: took 14.0287ms for pod "kube-apiserver-addons-953400" in "kube-system" namespace to be "Ready" ...
	I0314 17:45:29.373555    6448 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-953400" in "kube-system" namespace to be "Ready" ...
	I0314 17:45:29.374458    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:29.393221    6448 pod_ready.go:92] pod "kube-controller-manager-addons-953400" in "kube-system" namespace has status "Ready":"True"
	I0314 17:45:29.393221    6448 pod_ready.go:81] duration metric: took 19.665ms for pod "kube-controller-manager-addons-953400" in "kube-system" namespace to be "Ready" ...
	I0314 17:45:29.393221    6448 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kddsj" in "kube-system" namespace to be "Ready" ...
	I0314 17:45:29.412186    6448 pod_ready.go:92] pod "kube-proxy-kddsj" in "kube-system" namespace has status "Ready":"True"
	I0314 17:45:29.412241    6448 pod_ready.go:81] duration metric: took 19.0178ms for pod "kube-proxy-kddsj" in "kube-system" namespace to be "Ready" ...
	I0314 17:45:29.412241    6448 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-953400" in "kube-system" namespace to be "Ready" ...
	I0314 17:45:29.432323    6448 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 17:45:29.432323    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 17:45:29.458234    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:29.458305    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:29.459051    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:29.532964    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0314 17:45:29.558028    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:29.558095    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:29.558333    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:29.585925    6448 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 17:45:29.585925    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 17:45:29.606884    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:29.606884    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:29.607885    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:29.610880    6448 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0314 17:45:29.610880    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0314 17:45:29.652623    6448 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0314 17:45:29.652623    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0314 17:45:29.656722    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:29.656823    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:29.657438    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:29.714915    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 17:45:29.726362    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:29.726362    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:29.726842    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:29.748240    6448 pod_ready.go:92] pod "kube-scheduler-addons-953400" in "kube-system" namespace has status "Ready":"True"
	I0314 17:45:29.748240    6448 pod_ready.go:81] duration metric: took 335.9749ms for pod "kube-scheduler-addons-953400" in "kube-system" namespace to be "Ready" ...
	I0314 17:45:29.748240    6448 pod_ready.go:38] duration metric: took 13.6673959s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 17:45:29.748240    6448 api_server.go:52] waiting for apiserver process to appear ...
	I0314 17:45:29.760255    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:29.760255    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:29.761247    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:29.763253    6448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 17:45:29.802165    6448 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0314 17:45:29.802165    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0314 17:45:29.840451    6448 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0314 17:45:29.840451    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0314 17:45:29.952592    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 17:45:29.983569    6448 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0314 17:45:29.983569    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0314 17:45:30.013248    6448 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0314 17:45:30.013248    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0314 17:45:30.019893    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0314 17:45:30.193133    6448 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0314 17:45:30.193133    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0314 17:45:30.197134    6448 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0314 17:45:30.197134    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0314 17:45:30.202134    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0314 17:45:30.205134    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:30.205134    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:30.206129    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:30.284715    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0314 17:45:30.298460    6448 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0314 17:45:30.298563    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0314 17:45:30.360482    6448 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0314 17:45:30.360582    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0314 17:45:30.392563    6448 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0314 17:45:30.392631    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0314 17:45:30.403887    6448 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0314 17:45:30.403887    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0314 17:45:30.522703    6448 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0314 17:45:30.522703    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0314 17:45:30.553452    6448 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0314 17:45:30.553452    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0314 17:45:30.559333    6448 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0314 17:45:30.559387    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0314 17:45:30.592475    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0314 17:45:30.711544    6448 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0314 17:45:30.711544    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0314 17:45:30.726804    6448 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0314 17:45:30.726804    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0314 17:45:30.744467    6448 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0314 17:45:30.744467    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0314 17:45:30.758803    6448 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 17:45:30.758882    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0314 17:45:30.945747    6448 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0314 17:45:30.945747    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0314 17:45:30.964993    6448 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0314 17:45:30.964993    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0314 17:45:31.024326    6448 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0314 17:45:31.024326    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0314 17:45:31.031275    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 17:45:31.128789    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0314 17:45:31.163549    6448 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0314 17:45:31.163621    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0314 17:45:31.239040    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0314 17:45:31.375117    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:31.375117    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:31.376590    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:31.422472    6448 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0314 17:45:31.422472    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0314 17:45:31.427474    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:31.428053    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:31.429009    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:31.480677    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.1239906s)
	I0314 17:45:31.714463    6448 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0314 17:45:31.714463    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0314 17:45:32.051322    6448 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0314 17:45:32.051379    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0314 17:45:32.067955    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0314 17:45:32.274977    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:32.274977    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:32.275530    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:32.322751    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 17:45:32.424370    6448 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0314 17:45:32.424370    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0314 17:45:32.708239    6448 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0314 17:45:32.708239    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0314 17:45:32.981479    6448 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0314 17:45:32.985192    6448 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0314 17:45:32.985192    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0314 17:45:33.247060    6448 addons.go:234] Setting addon gcp-auth=true in "addons-953400"
	I0314 17:45:33.247748    6448 host.go:66] Checking if "addons-953400" exists ...
	I0314 17:45:33.248421    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:33.250839    6448 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0314 17:45:33.250872    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0314 17:45:33.508603    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0314 17:45:33.524271    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.9910114s)
	I0314 17:45:35.441055    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:35.441055    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:35.449808    6448 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0314 17:45:35.449808    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-953400 ).state
	I0314 17:45:35.720344    6448 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.9566486s)
	I0314 17:45:35.720344    6448 api_server.go:72] duration metric: took 24.5125803s to wait for apiserver process to appear ...
	I0314 17:45:35.720344    6448 api_server.go:88] waiting for apiserver healthz status ...
	I0314 17:45:35.720344    6448 api_server.go:253] Checking apiserver healthz at https://172.17.87.211:8443/healthz ...
	I0314 17:45:35.720344    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.0049836s)
	I0314 17:45:35.720344    6448 addons.go:470] Verifying addon metrics-server=true in "addons-953400"
	I0314 17:45:35.739763    6448 api_server.go:279] https://172.17.87.211:8443/healthz returned 200:
	ok
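The healthz gate above is nothing more than an HTTPS GET against the apiserver's /healthz endpoint, repeated until it answers 200 with the body "ok". A minimal Go sketch of such a probe, assuming the caller supplies the URL; skipping TLS verification here is for brevity only, and none of these names are minikube's actual code:

package example

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver healthz endpoint and
// returns an error unless it answers 200.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: a real client should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body) // body is "ok" when healthy
	return nil
}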
	I0314 17:45:35.757300    6448 api_server.go:141] control plane version: v1.28.4
	I0314 17:45:35.757300    6448 api_server.go:131] duration metric: took 36.9527ms to wait for apiserver health ...
	I0314 17:45:35.757300    6448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 17:45:35.959524    6448 system_pods.go:59] 11 kube-system pods found
	I0314 17:45:35.959524    6448 system_pods.go:61] "coredns-5dd5756b68-68dzl" [b7bc787f-11dc-4162-8032-084efecbb988] Running
	I0314 17:45:35.959524    6448 system_pods.go:61] "etcd-addons-953400" [a40f77a3-680e-47be-8301-caa7e95329dc] Running
	I0314 17:45:35.959524    6448 system_pods.go:61] "kube-apiserver-addons-953400" [6d07ed61-e73d-4d08-8429-c7d8daeaa0c8] Running
	I0314 17:45:35.959524    6448 system_pods.go:61] "kube-controller-manager-addons-953400" [1215ca8b-6a2a-4028-b09a-33fa60074ff5] Running
	I0314 17:45:35.959524    6448 system_pods.go:61] "kube-ingress-dns-minikube" [1284c06a-5346-4f0d-87cd-c03756700ec1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0314 17:45:35.959524    6448 system_pods.go:61] "kube-proxy-kddsj" [afb51731-214b-44ad-a6b3-b4b908db21ff] Running
	I0314 17:45:35.959524    6448 system_pods.go:61] "kube-scheduler-addons-953400" [7e25db50-ac34-4eb8-9a74-e21df7f69928] Running
	I0314 17:45:35.959524    6448 system_pods.go:61] "metrics-server-69cf46c98-z95zf" [2fb36680-9447-477d-abd8-ef22bac39ee7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 17:45:35.959524    6448 system_pods.go:61] "nvidia-device-plugin-daemonset-k2kqr" [93445059-341c-47bd-aac9-8a1887ea3d53] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0314 17:45:35.959524    6448 system_pods.go:61] "registry-9g2gl" [a3d1b2c5-1dbe-465c-a3cb-5e2c60dfc6aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0314 17:45:35.959524    6448 system_pods.go:61] "registry-proxy-98xjp" [cb78e5bd-1c28-45cc-b020-51f1a27eeb0a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0314 17:45:35.959524    6448 system_pods.go:74] duration metric: took 202.2093ms to wait for pod list to return data ...
	I0314 17:45:35.959524    6448 default_sa.go:34] waiting for default service account to be created ...
	I0314 17:45:36.043346    6448 default_sa.go:45] found service account: "default"
	I0314 17:45:36.043346    6448 default_sa.go:55] duration metric: took 83.8155ms for default service account to be created ...
	I0314 17:45:36.043346    6448 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 17:45:36.066556    6448 system_pods.go:86] 11 kube-system pods found
	I0314 17:45:36.066556    6448 system_pods.go:89] "coredns-5dd5756b68-68dzl" [b7bc787f-11dc-4162-8032-084efecbb988] Running
	I0314 17:45:36.066556    6448 system_pods.go:89] "etcd-addons-953400" [a40f77a3-680e-47be-8301-caa7e95329dc] Running
	I0314 17:45:36.066556    6448 system_pods.go:89] "kube-apiserver-addons-953400" [6d07ed61-e73d-4d08-8429-c7d8daeaa0c8] Running
	I0314 17:45:36.066556    6448 system_pods.go:89] "kube-controller-manager-addons-953400" [1215ca8b-6a2a-4028-b09a-33fa60074ff5] Running
	I0314 17:45:36.066556    6448 system_pods.go:89] "kube-ingress-dns-minikube" [1284c06a-5346-4f0d-87cd-c03756700ec1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0314 17:45:36.066556    6448 system_pods.go:89] "kube-proxy-kddsj" [afb51731-214b-44ad-a6b3-b4b908db21ff] Running
	I0314 17:45:36.067099    6448 system_pods.go:89] "kube-scheduler-addons-953400" [7e25db50-ac34-4eb8-9a74-e21df7f69928] Running
	I0314 17:45:36.067099    6448 system_pods.go:89] "metrics-server-69cf46c98-z95zf" [2fb36680-9447-477d-abd8-ef22bac39ee7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 17:45:36.067099    6448 system_pods.go:89] "nvidia-device-plugin-daemonset-k2kqr" [93445059-341c-47bd-aac9-8a1887ea3d53] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0314 17:45:36.067159    6448 system_pods.go:89] "registry-9g2gl" [a3d1b2c5-1dbe-465c-a3cb-5e2c60dfc6aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0314 17:45:36.067159    6448 system_pods.go:89] "registry-proxy-98xjp" [cb78e5bd-1c28-45cc-b020-51f1a27eeb0a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0314 17:45:36.067159    6448 system_pods.go:126] duration metric: took 23.8119ms to wait for k8s-apps to be running ...
	I0314 17:45:36.067212    6448 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 17:45:36.075127    6448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 17:45:36.548183    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.5951002s)
	I0314 17:45:36.548280    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.527902s)
	I0314 17:45:36.548280    6448 addons.go:470] Verifying addon registry=true in "addons-953400"
	I0314 17:45:36.552801    6448 out.go:177] * Verifying registry addon...
	I0314 17:45:36.560413    6448 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0314 17:45:36.594363    6448 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0314 17:45:36.594414    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
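The kapi.go:96 lines that follow are one poll loop per addon: list the pods matching a label selector, log their phase, and repeat until none is Pending. A minimal client-go sketch of the same pattern, assuming a configured clientset; the function name and parameters are illustrative, not minikube's implementation:

package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsRunning polls every interval until every pod matching selector
// in ns reports phase Running, or the timeout elapses.
func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
	return wait.PollImmediate(interval, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}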
	I0314 17:45:37.179818    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:37.581476    6448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 17:45:37.581476    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:37.581476    6448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-953400 ).networkadapters[0]).ipaddresses[0]
	I0314 17:45:37.603845    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:38.130174    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:38.602821    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:39.143306    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:39.586267    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:40.071743    6448 main.go:141] libmachine: [stdout =====>] : 172.17.87.211
	
	I0314 17:45:40.071743    6448 main.go:141] libmachine: [stderr =====>] : 
	I0314 17:45:40.071743    6448 sshutil.go:53] new ssh client: &{IP:172.17.87.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-953400\id_rsa Username:docker}
	I0314 17:45:40.091716    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:40.637406    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:40.699557    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.4966429s)
	I0314 17:45:40.699557    6448 addons.go:470] Verifying addon ingress=true in "addons-953400"
	I0314 17:45:40.699557    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.4140676s)
	I0314 17:45:40.699557    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.1063309s)
	I0314 17:45:40.700554    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.6674783s)
	I0314 17:45:40.700554    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.5710545s)
	I0314 17:45:40.700554    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.4608115s)
	I0314 17:45:40.700554    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.6319583s)
	I0314 17:45:40.700554    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.3771803s)
	I0314 17:45:40.703564    6448 out.go:177] * Verifying ingress addon...
	W0314 17:45:40.706561    6448 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0314 17:45:40.706561    6448 retry.go:31] will retry after 223.289818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
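The failure above is a CRD establish race: the VolumeSnapshotClass instance was applied in the same kubectl batch as the CRDs that define it, before API discovery had registered the new kinds, which is exactly what "ensure CRDs are installed first" points at. minikube handles it with the short backoff and forced re-apply seen below, which completes at 17:45:44 without a further retry. One way to avoid the race altogether is to apply the CRDs on their own and block until they are established; a minimal sketch, assuming kubectl on PATH, with illustrative file and directory names:

package main

import (
	"log"
	"os/exec"
)

// run shells out to kubectl and aborts on the first failure.
func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	// 1. Apply the snapshot CRDs on their own.
	run("apply", "-f", "snapshot-crds/")
	// 2. Block until the new kind is registered in discovery.
	run("wait", "--for=condition=Established",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s")
	// 3. Only then apply instances of that kind.
	run("apply", "-f", "csi-hostpath-snapshotclass.yaml")
}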
	I0314 17:45:40.708553    6448 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0314 17:45:40.711576    6448 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-953400 service yakd-dashboard -n yakd-dashboard
	
	I0314 17:45:40.775913    6448 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0314 17:45:40.775913    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0314 17:45:40.795624    6448 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
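The warning above is an optimistic-concurrency conflict: the StorageClass update was sent with a stale resourceVersion while another writer modified the object, so the apiserver rejected it with "the object has been modified". client-go ships a standard helper that re-reads the object and retries on exactly this error class; a minimal sketch, assuming a configured clientset (the function name and wiring are illustrative, not the addon's actual code):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markDefault re-reads the StorageClass on every attempt so the update always
// carries the latest resourceVersion, retrying only on conflict errors.
func markDefault(cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
}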
	I0314 17:45:40.942075    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 17:45:41.080728    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:41.230804    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:41.592852    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:41.751686    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:42.078683    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:42.240336    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:42.588725    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:42.740026    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:43.048937    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.5396262s)
	I0314 17:45:43.048937    6448 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-953400"
	I0314 17:45:43.048937    6448 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (6.9732926s)
	I0314 17:45:43.048937    6448 system_svc.go:56] duration metric: took 6.9812595s WaitForService to wait for kubelet
	I0314 17:45:43.048937    6448 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.5985656s)
	I0314 17:45:43.048937    6448 kubeadm.go:576] duration metric: took 31.8406292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 17:45:43.052237    6448 out.go:177] * Verifying csi-hostpath-driver addon...
	I0314 17:45:43.055338    6448 node_conditions.go:102] verifying NodePressure condition ...
	I0314 17:45:43.057072    6448 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0314 17:45:43.057700    6448 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 17:45:43.061808    6448 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0314 17:45:43.070771    6448 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0314 17:45:43.070771    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0314 17:45:43.093984    6448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 17:45:43.093984    6448 node_conditions.go:123] node cpu capacity is 2
	I0314 17:45:43.093984    6448 node_conditions.go:105] duration metric: took 36.2813ms to run NodePressure ...
	I0314 17:45:43.093984    6448 start.go:240] waiting for startup goroutines ...
	I0314 17:45:43.119468    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:43.126897    6448 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0314 17:45:43.126897    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:43.210083    6448 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0314 17:45:43.210160    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0314 17:45:43.275088    6448 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0314 17:45:43.275158    6448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0314 17:45:43.340623    6448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0314 17:45:43.433749    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:43.576525    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:43.580246    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:43.737847    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:44.095156    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:44.104564    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:44.228807    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:44.395227    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.4524834s)
	I0314 17:45:44.575910    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:44.576489    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:44.732124    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:45.092076    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:45.097835    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:45.224161    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:45.455711    6448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.1149308s)
	I0314 17:45:45.465450    6448 addons.go:470] Verifying addon gcp-auth=true in "addons-953400"
	I0314 17:45:45.469450    6448 out.go:177] * Verifying gcp-auth addon...
	I0314 17:45:45.474438    6448 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0314 17:45:45.493156    6448 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0314 17:45:45.493156    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:45.569268    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:45.574887    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:45.728454    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:45.981964    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:46.080779    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:46.081818    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:46.238322    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:46.486265    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:46.570309    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:46.574176    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:46.727334    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:46.979571    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:47.077254    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:47.077941    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:47.231053    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:47.485634    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:47.582385    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:47.585363    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:47.722981    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:47.990360    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:48.074585    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:48.074585    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:48.231212    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:48.482002    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:48.579372    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:48.580478    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:48.736187    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:48.991778    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:49.074460    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:49.076685    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:49.231482    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:49.481139    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:49.578136    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:49.581829    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:49.735151    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:49.988500    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:50.069950    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:50.071003    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:50.228152    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:50.491910    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:50.574010    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:50.575189    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:50.733752    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:50.986736    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:51.070732    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:51.077045    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:51.230706    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:51.497263    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:51.574824    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:51.576769    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:51.730897    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:51.982814    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:52.079035    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:52.085375    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:52.237757    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:52.486470    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:52.569398    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:52.571432    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:52.726165    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:52.992907    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:53.074117    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:53.075210    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:53.611930    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:53.614319    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:53.614811    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:53.616440    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:53.728766    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:53.979746    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:54.079850    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:54.082191    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:54.235517    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:55.135087    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:55.137509    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:55.138389    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:55.143171    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:55.145856    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:55.146519    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:55.147920    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:55.237994    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:55.491641    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:55.577561    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:55.581988    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:55.926627    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:55.982086    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:56.079542    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:56.086634    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:56.236515    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:56.619870    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:56.620985    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:56.622921    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:56.733578    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:56.985409    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:57.082948    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:57.088205    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:57.222674    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:57.488091    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:57.573972    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:57.580705    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:57.724135    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:57.993232    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:58.072702    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:58.074045    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:58.231429    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:58.482651    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:58.579102    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:58.580950    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:58.735260    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:58.987623    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:59.084645    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:59.087175    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:59.226702    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:59.494679    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:45:59.577522    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:45:59.579939    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:45:59.735305    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:45:59.988898    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:00.070807    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:00.070807    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:00.226005    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:00.493219    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:00.575509    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:00.576113    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:00.731488    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:00.982621    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:01.079791    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:01.081619    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:01.233632    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:01.489782    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:01.569233    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:01.572966    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:01.728897    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:01.997153    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:02.080737    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:02.081826    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:02.236177    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:02.485214    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:02.582149    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:02.582398    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:02.725216    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:02.992096    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:03.074011    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:03.074606    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:03.231312    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:03.484947    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:03.583605    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:03.585505    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:03.725261    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:03.994057    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:04.083701    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:04.084742    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:04.231791    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:04.481492    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:04.579843    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:04.585050    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:04.734637    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:04.988588    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:05.068823    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:05.073821    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:05.231840    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:05.485296    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:05.582577    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:05.582577    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:05.730493    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:05.982372    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:06.080433    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:06.081225    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:06.234355    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:06.487211    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:06.568812    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:06.574077    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:06.737642    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:06.990396    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:07.072786    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:07.072786    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:07.227364    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:07.495873    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:07.578308    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:07.578912    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:07.736435    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:07.994014    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:08.075335    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:08.076568    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:08.225132    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:08.494308    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:08.574361    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:08.575894    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:08.734585    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:08.985588    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:09.082522    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:09.083520    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:09.224010    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:09.493777    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:09.574722    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:09.576133    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:09.731911    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:09.982995    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:10.080404    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:10.084521    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:10.241353    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:10.489025    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:10.573000    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:10.573692    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:10.727368    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:10.993571    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:11.077347    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:11.078985    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:11.234791    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:11.487736    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:11.584260    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:11.585085    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:12.127633    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:12.128333    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:12.128955    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:12.132542    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:12.477417    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:13.672444    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:13.672517    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:13.678138    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:13.693470    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:13.693758    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:13.693758    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:13.694284    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:13.700803    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:13.728373    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:13.997133    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:14.082165    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:14.082513    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:14.235851    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:14.498462    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:14.591868    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:14.597300    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:14.727748    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:14.992752    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:15.075631    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:15.076315    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:15.232917    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:15.483637    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:15.588198    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:15.591546    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:15.736958    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:15.988461    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:16.070341    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:16.075254    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:16.228887    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:16.493210    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:16.575107    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:16.576980    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:16.730297    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:16.982157    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:17.078432    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:17.084511    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:17.235585    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:17.488629    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:17.587262    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:17.589728    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:17.728018    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:17.996384    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:18.079476    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:18.082102    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:18.235734    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:18.487853    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:18.583954    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:18.585357    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:18.726203    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:18.993047    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:19.075301    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:19.078261    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:19.231504    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:19.484101    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:19.582654    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:19.584142    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:19.737275    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:19.985251    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:20.085127    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:20.085163    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:20.236032    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:20.484246    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:20.812860    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:20.813716    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:20.814714    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:20.984048    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:21.084272    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:21.087264    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:21.236477    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:21.492184    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:21.583134    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:21.584474    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:21.725043    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:21.988726    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:22.084356    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:22.085739    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:22.237867    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:22.488148    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:22.584744    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:22.586274    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:22.739294    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:22.988570    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:23.091363    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:23.093804    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:23.228166    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:23.495369    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:23.577190    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:23.580536    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:23.733150    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:23.986789    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:24.082089    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:24.082241    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:24.237932    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:24.633119    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:24.633119    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:24.633171    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:24.737057    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:24.990228    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:25.083584    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:25.084767    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:25.238601    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:25.488526    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:25.585474    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:25.587072    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:25.725826    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:25.989268    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:26.083962    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:26.085855    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:26.224906    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:26.496926    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:26.574789    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:26.577509    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:26.738182    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:26.983078    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:27.080970    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:27.083958    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:27.224556    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:27.484766    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:27.580900    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:27.581661    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:27.736338    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:27.988725    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:28.085036    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:28.085299    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:28.224712    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:28.494495    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:28.575454    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:28.577835    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:28.732324    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:28.986413    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:29.082519    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:29.083467    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:29.238764    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:29.491644    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:29.585760    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:29.588045    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:29.725131    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:29.988071    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:30.087628    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:30.088230    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:30.224521    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:30.493249    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:30.582045    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:30.595790    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:30.739129    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:30.994803    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:31.083567    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:31.092002    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:31.236643    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:31.489932    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:31.598822    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:31.600027    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:31.727050    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:31.995657    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:32.076673    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:32.078241    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:32.231777    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:32.483799    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:32.580693    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:32.581383    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:32.740689    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:32.995281    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:33.073805    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:33.074393    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:33.231455    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:33.496132    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:33.576824    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:33.577002    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:33.733023    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:33.986426    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:34.084994    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:34.085172    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:34.224293    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:34.495105    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:34.576232    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:34.576607    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:34.734418    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:34.986714    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:35.086206    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:35.091634    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:35.226711    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:35.494665    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:35.577954    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:35.578722    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:35.733236    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:35.984934    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:36.083115    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:36.083115    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:36.237872    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:36.488506    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:36.584854    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:36.586195    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:36.726434    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:36.993159    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:37.073918    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:37.073918    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:37.232402    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:37.497538    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:37.579762    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:37.581616    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:37.745345    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:37.991865    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:38.072366    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:38.074728    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:38.228676    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:38.496901    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:38.577710    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:38.579712    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:38.735153    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:38.985403    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:39.086605    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:39.088908    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:39.237239    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:39.489139    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:39.584591    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:39.586346    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:39.737927    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:39.987391    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:40.082875    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:40.084764    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:40.237789    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:40.486340    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:40.581939    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:40.584696    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:40.730999    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:40.988402    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:41.169638    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:41.170614    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:41.236948    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:41.495219    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:41.585763    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 17:46:41.586067    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:41.729748    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:41.995628    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:42.078254    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:42.080298    6448 kapi.go:107] duration metric: took 1m5.5150368s to wait for kubernetes.io/minikube-addons=registry ...
	I0314 17:46:42.233814    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:42.485600    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:42.584617    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:42.726122    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:42.991130    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:43.088627    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:43.228894    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:43.496876    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:43.577452    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:43.732198    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:43.985931    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:44.083003    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:44.241521    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:44.827313    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:44.831087    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:44.832976    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:45.279905    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:45.280555    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:45.281680    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:45.490109    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:45.584511    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:45.739569    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:45.989608    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:46.088121    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:46.228095    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:46.496926    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:46.587432    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:46.734528    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:46.986997    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:47.086151    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:47.244559    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:47.497339    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:47.579890    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:47.737700    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:47.990837    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:48.086311    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:48.226346    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:48.498322    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:48.581876    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:48.738785    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:48.992736    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:49.084965    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:49.240395    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:49.492262    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:49.590246    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:49.730494    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:49.998283    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:50.081137    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:50.238210    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:50.492145    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:50.588783    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:50.734696    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:50.995234    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:51.077140    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:51.233975    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:51.532046    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:51.581583    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:51.881103    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:51.994409    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:52.077209    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:52.230877    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:52.497499    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:52.579321    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:52.737959    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:52.992130    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:53.934776    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:53.940240    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:53.941435    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:53.947690    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:54.123993    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:54.124599    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:54.127253    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:54.579017    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:54.581070    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:54.583478    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:54.738182    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:54.989051    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:55.084162    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:55.240947    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:55.490766    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:55.594306    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:55.735800    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:55.995900    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:56.079414    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:56.239268    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:56.485558    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:56.584456    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:56.740236    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:56.994142    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:57.095430    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:57.230538    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:57.497112    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:57.584117    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:57.737360    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:57.990002    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:58.088355    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:58.228144    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:58.495138    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:58.577000    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:58.731025    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:58.984920    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:59.082609    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:59.240137    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:59.492822    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:46:59.575484    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:46:59.733178    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:46:59.987250    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:00.085975    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:00.430731    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:00.488896    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:00.786286    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:00.787849    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:00.993868    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:01.088281    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:01.232693    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:01.492665    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:01.589126    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:01.729714    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:02.015490    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:02.084990    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:02.238883    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:02.491619    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:02.589570    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:02.728519    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:02.996535    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:03.076836    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:03.233893    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:03.486487    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:03.583725    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:03.740397    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:03.990825    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:04.088666    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:04.227610    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:04.788981    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:04.791081    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:04.793814    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:05.120247    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:05.123825    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:05.230747    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:05.785531    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:05.785932    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:05.787346    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:05.998860    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:06.084655    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:06.235960    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:06.568790    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:06.681667    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:06.734258    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:06.988019    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:07.089875    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:07.242210    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:07.493933    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:07.590196    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:07.733370    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:07.986654    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:08.082590    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:08.238156    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:08.489145    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:08.585892    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:08.728698    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:08.998615    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:09.080280    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:09.238145    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:09.490327    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:09.585307    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:09.728108    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:09.996363    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:10.075293    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:10.230204    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:10.493508    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:10.591410    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:10.732199    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:11.000064    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:11.080137    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:11.238644    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:11.486319    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:11.586471    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:11.740279    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:11.993327    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:12.077141    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:12.232275    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:12.500117    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:12.582204    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:12.937342    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:12.987618    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:13.093749    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:13.244866    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:13.496688    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:13.581933    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:13.734517    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:13.986748    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:14.085511    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:14.230839    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:14.498998    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:14.581936    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:14.738503    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:14.993302    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:15.091780    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:15.232910    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:15.489812    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:15.587394    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:15.730119    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:15.984920    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:16.083993    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:16.242893    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:16.494441    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:16.576630    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:16.733337    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:16.987223    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:17.084176    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:17.239831    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:17.492467    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:17.591186    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:17.728204    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:18.000211    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:18.088600    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:18.236766    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:18.488088    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:18.584379    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:18.738847    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:18.990696    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:19.084374    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:19.243399    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:19.496765    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:19.577005    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:19.733588    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:19.989662    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:20.086243    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:20.243463    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:20.489511    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:20.587491    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:20.946034    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:21.000251    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:21.102080    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:21.233762    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:21.499913    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:21.581250    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:21.735073    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:22.005747    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:22.082464    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:22.242639    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:22.488737    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:22.587405    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:22.743081    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:22.994766    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:23.094091    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:23.234443    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:23.488688    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:23.584733    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:24.142180    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:24.146364    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:24.154029    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:24.253573    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:24.490333    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:24.589929    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:24.729980    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:24.995935    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:25.092531    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:25.244634    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:25.494555    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:25.590449    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:25.730102    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:25.991988    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:26.086888    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:26.240839    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:26.489613    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:26.585566    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:26.738782    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:27.002143    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:27.083748    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:27.229189    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:27.491792    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:27.589250    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:27.742327    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:27.989773    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:28.086802    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:28.241057    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:28.491483    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:28.587977    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:28.729114    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:28.999006    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:29.080337    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:29.240255    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:29.814666    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:29.814997    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:29.818610    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:29.991534    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:30.088113    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:30.245600    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:30.495085    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:30.603679    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:30.732596    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:30.986992    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:31.085615    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:31.242429    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:31.490641    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:31.586724    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:31.742845    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:31.994645    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:32.091168    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:32.234698    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:32.498032    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:32.578296    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:32.735346    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:33.002520    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:33.097084    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:33.243718    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:33.493403    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:33.591006    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:33.730539    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:34.000767    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:34.080579    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:34.232645    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:34.496631    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:34.594913    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:34.732791    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:35.001198    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:35.080873    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:35.236868    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:35.503392    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:35.582158    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:35.737974    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:35.988111    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:36.085460    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:36.241158    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:36.494649    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:36.594342    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:36.731437    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:36.997574    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:37.079609    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:37.229981    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:37.497320    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:37.590249    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:37.731221    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:38.000532    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:38.443894    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:38.444592    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:38.501390    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:38.583921    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:38.740087    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:39.003403    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:39.082859    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:39.239102    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:39.510550    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:39.584905    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:39.749018    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:39.995412    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:40.095801    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:40.233905    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:40.502485    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:40.597999    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:40.746512    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:40.993453    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:41.087901    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:41.240081    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:41.492179    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:41.588033    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:41.735313    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:41.997449    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:42.081183    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:42.232966    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:42.502226    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:42.584232    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:42.739222    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:42.994103    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:43.090117    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:43.232098    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:43.502338    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:43.580737    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:44.124764    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:44.125683    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:44.126693    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:44.257044    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:44.490799    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:44.589977    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:44.741842    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:44.990118    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:45.086921    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:45.238538    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:45.489505    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:45.600812    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:45.743401    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:45.999764    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:46.137970    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:46.235214    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:46.488786    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:46.586693    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:46.742716    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:46.996091    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:47.490018    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:47.490018    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:47.493521    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:47.579701    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 17:47:47.738897    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:47.990522    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:48.087340    6448 kapi.go:107] duration metric: took 2m5.0210592s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0314 17:47:48.243739    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:48.493862    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:48.734554    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:49.001843    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:49.236019    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:49.503277    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:49.741844    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:49.995673    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:50.233059    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:50.497006    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:50.733218    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:50.996879    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:51.240377    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:51.500708    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:51.736211    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:52.001108    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:52.237473    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:52.498221    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:52.735460    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:53.001530    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:53.237531    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:53.502829    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:53.737932    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:54.010541    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:54.241487    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:54.504905    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:54.737444    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:55.001951    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:55.237675    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:55.501062    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:55.734164    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:55.997991    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:56.233539    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:56.494959    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:56.742616    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:56.992019    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:57.242054    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:57.497521    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:57.731782    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:57.999769    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:58.232314    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:58.498493    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:58.735208    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:59.002577    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:59.241064    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:59.496250    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:47:59.737654    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:47:59.989483    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:00.242954    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:00.499172    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:00.737009    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:01.003621    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:01.295495    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:01.640351    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:02.055921    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:02.057106    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:02.491243    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:02.493548    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:02.736468    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:03.000837    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:03.240459    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:03.499937    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:03.731650    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:04.001628    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:04.242485    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:04.495769    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:04.734419    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:05.004067    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:05.244808    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:05.499733    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:05.738937    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:05.991008    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:06.245555    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:06.499875    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:06.740709    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:06.990964    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:07.246764    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:07.498172    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:07.735371    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:08.005087    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:08.239799    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:08.494207    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:08.734790    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:09.003396    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:09.241143    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:09.503758    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:09.820765    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:10.628876    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:10.630618    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:10.988535    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:10.989172    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:10.994091    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:11.281543    6448 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 17:48:11.495729    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:11.731945    6448 kapi.go:107] duration metric: took 2m31.0122832s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0314 17:48:12.001327    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:12.493948    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:13.003256    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:13.497405    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:14.005210    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:14.498461    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:15.181651    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:15.496037    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:16.030015    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:16.492272    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:17.003193    6448 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 17:48:17.496006    6448 kapi.go:107] duration metric: took 2m32.0102337s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0314 17:48:17.498786    6448 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-953400 cluster.
	I0314 17:48:17.500961    6448 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0314 17:48:17.503147    6448 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0314 17:48:17.505556    6448 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, metrics-server, storage-provisioner, cloud-spanner, helm-tiller, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0314 17:48:17.507267    6448 addons.go:505] duration metric: took 3m6.2876027s for enable addons: enabled=[nvidia-device-plugin ingress-dns metrics-server storage-provisioner cloud-spanner helm-tiller inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0314 17:48:17.507267    6448 start.go:245] waiting for cluster config update ...
	I0314 17:48:17.507267    6448 start.go:254] writing updated cluster config ...
	I0314 17:48:17.516882    6448 ssh_runner.go:195] Run: rm -f paused
	I0314 17:48:17.720699    6448 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 17:48:17.725705    6448 out.go:177] * Done! kubectl is now configured to use "addons-953400" cluster and "default" namespace by default
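
The lines above show kapi.go:96 polling each addon's label selector until its pod leaves Pending, followed by out.go printing the gcp-auth usage hints. Both behaviors translate to ordinary commands against this run's addons-953400 profile; a sketch, where the pod name skip-demo and the ingress-nginx namespace are assumptions, and any value works for the gcp-auth-skip-secret label key per the hint above:

    # Roughly what the kapi.go polling amounts to: wait for the labeled pod to go Ready
    kubectl --context addons-953400 wait pod -l app.kubernetes.io/name=ingress-nginx --namespace ingress-nginx --for=condition=Ready --timeout=6m

    # Create a pod the gcp-auth webhook leaves alone: it skips pods carrying this label key
    kubectl --context addons-953400 run skip-demo --image=gcr.io/k8s-minikube/busybox --restart=Never --labels="gcp-auth-skip-secret=true" -- sleep 300

    # Remount credentials into pods created before the addon finished, per the --refresh hint above
    out/minikube-windows-amd64.exe -p addons-953400 addons enable gcp-auth --refresh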
	
	
	==> Docker <==
	Mar 14 17:48:51 addons-953400 cri-dockerd[1214]: time="2024-03-14T17:48:51Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Downloaded newer image for nginx:latest"
	Mar 14 17:48:51 addons-953400 dockerd[1329]: time="2024-03-14T17:48:51.523910582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 17:48:51 addons-953400 dockerd[1329]: time="2024-03-14T17:48:51.524026590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 17:48:51 addons-953400 dockerd[1329]: time="2024-03-14T17:48:51.524042191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 17:48:51 addons-953400 dockerd[1329]: time="2024-03-14T17:48:51.525052764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 17:49:00 addons-953400 dockerd[1323]: time="2024-03-14T17:49:00.602738212Z" level=info msg="ignoring event" container=236b6fab60b1d19ce417396983b3cbc0e22e5c46cfac64197e25b9b52e9566b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 17:49:00 addons-953400 dockerd[1329]: time="2024-03-14T17:49:00.602942127Z" level=info msg="shim disconnected" id=236b6fab60b1d19ce417396983b3cbc0e22e5c46cfac64197e25b9b52e9566b0 namespace=moby
	Mar 14 17:49:00 addons-953400 dockerd[1329]: time="2024-03-14T17:49:00.603017232Z" level=warning msg="cleaning up after shim disconnected" id=236b6fab60b1d19ce417396983b3cbc0e22e5c46cfac64197e25b9b52e9566b0 namespace=moby
	Mar 14 17:49:00 addons-953400 dockerd[1329]: time="2024-03-14T17:49:00.603648677Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 14 17:49:00 addons-953400 dockerd[1323]: time="2024-03-14T17:49:00.802335316Z" level=info msg="ignoring event" container=e720febc9345818a6530d0bf9454c8ba99f10762c190b8c0a2e1d7839fb858db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 17:49:00 addons-953400 dockerd[1329]: time="2024-03-14T17:49:00.802971762Z" level=info msg="shim disconnected" id=e720febc9345818a6530d0bf9454c8ba99f10762c190b8c0a2e1d7839fb858db namespace=moby
	Mar 14 17:49:00 addons-953400 dockerd[1329]: time="2024-03-14T17:49:00.803088670Z" level=warning msg="cleaning up after shim disconnected" id=e720febc9345818a6530d0bf9454c8ba99f10762c190b8c0a2e1d7839fb858db namespace=moby
	Mar 14 17:49:00 addons-953400 dockerd[1329]: time="2024-03-14T17:49:00.803103872Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 14 17:49:03 addons-953400 dockerd[1323]: time="2024-03-14T17:49:03.259999464Z" level=info msg="ignoring event" container=c883b94d68fce445627aaab4012562bbdd0bf2c37152df51858bcd84b0203d1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 17:49:03 addons-953400 dockerd[1329]: time="2024-03-14T17:49:03.262380554Z" level=info msg="shim disconnected" id=c883b94d68fce445627aaab4012562bbdd0bf2c37152df51858bcd84b0203d1e namespace=moby
	Mar 14 17:49:03 addons-953400 dockerd[1329]: time="2024-03-14T17:49:03.262473562Z" level=warning msg="cleaning up after shim disconnected" id=c883b94d68fce445627aaab4012562bbdd0bf2c37152df51858bcd84b0203d1e namespace=moby
	Mar 14 17:49:03 addons-953400 dockerd[1329]: time="2024-03-14T17:49:03.262488763Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 14 17:49:03 addons-953400 dockerd[1323]: time="2024-03-14T17:49:03.409786225Z" level=info msg="ignoring event" container=8fbca9d727972c021459bb30240e05794be5d8ce04e07f315d84f9b3bc9ac912 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 17:49:03 addons-953400 dockerd[1329]: time="2024-03-14T17:49:03.410213559Z" level=info msg="shim disconnected" id=8fbca9d727972c021459bb30240e05794be5d8ce04e07f315d84f9b3bc9ac912 namespace=moby
	Mar 14 17:49:03 addons-953400 dockerd[1329]: time="2024-03-14T17:49:03.411093029Z" level=warning msg="cleaning up after shim disconnected" id=8fbca9d727972c021459bb30240e05794be5d8ce04e07f315d84f9b3bc9ac912 namespace=moby
	Mar 14 17:49:03 addons-953400 dockerd[1329]: time="2024-03-14T17:49:03.411111031Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 14 17:49:06 addons-953400 dockerd[1323]: time="2024-03-14T17:49:06.439542804Z" level=info msg="ignoring event" container=c6b1c541f22ae491b15a343a9d9312d0564cb8f96f9c2e5f6f08136622d48eed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 17:49:06 addons-953400 dockerd[1329]: time="2024-03-14T17:49:06.439994040Z" level=info msg="shim disconnected" id=c6b1c541f22ae491b15a343a9d9312d0564cb8f96f9c2e5f6f08136622d48eed namespace=moby
	Mar 14 17:49:06 addons-953400 dockerd[1329]: time="2024-03-14T17:49:06.440043344Z" level=warning msg="cleaning up after shim disconnected" id=c6b1c541f22ae491b15a343a9d9312d0564cb8f96f9c2e5f6f08136622d48eed namespace=moby
	Mar 14 17:49:06 addons-953400 dockerd[1329]: time="2024-03-14T17:49:06.440053745Z" level=info msg="cleaning up dead shim" namespace=moby
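
The "ignoring event" / "shim disconnected" / "cleaning up dead shim" triplets above are dockerd and its runc shims tearing down containers that exited, likely the registry containers being removed after the test's addons disable registry step; they are routine exit handling rather than daemon errors. The same stream can be read off the node directly; a sketch, assuming the stock minikube systemd unit names docker and cri-docker:

    out/minikube-windows-amd64.exe -p addons-953400 ssh -- sudo journalctl -u docker -u cri-docker --no-pager --since "10 min ago"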
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	c84ce1fc6b76a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 54 seconds ago       Running             gcp-auth                                 0                   d39819d9bd556       gcp-auth-7d69788767-cvs76
	063d928362293       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c                             59 seconds ago       Running             controller                               0                   604c57ccac1fa       ingress-nginx-controller-76dc478dd8-mrlz9
	0ae2ae348472f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   3595ce8b7be29       csi-hostpathplugin-v4htt
	5576107a2d43c       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   3595ce8b7be29       csi-hostpathplugin-v4htt
	84c99a3d59cb1       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   3595ce8b7be29       csi-hostpathplugin-v4htt
	1a0795535fa11       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   3595ce8b7be29       csi-hostpathplugin-v4htt
	6cf3859f44d2f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   3595ce8b7be29       csi-hostpathplugin-v4htt
	759c3e2e1abca       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   f822e7cb967fe       csi-hostpath-attacher-0
	d37d41d6608ad       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   5f4d3ae8dcc28       csi-hostpath-resizer-0
	54c13bd5c1633       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   3595ce8b7be29       csi-hostpathplugin-v4htt
	54270d3571ea5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334                   About a minute ago   Exited              patch                                    0                   9f89758d0d2b5       ingress-nginx-admission-patch-r5t4p
	57e08a2073a9f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334                   About a minute ago   Exited              create                                   0                   61dcf3704cf1b       ingress-nginx-admission-create-rhv74
	6ac8164d4b8cf       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   5b3a4d606fa32       snapshot-controller-58dbcc7b99-vfqkc
	ff6a44bc697bb       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       About a minute ago   Running             local-path-provisioner                   0                   a9f4e6c19f8e8       local-path-provisioner-78b46b4d5c-zzbvd
	b64a1ec6ee15f       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   810dfb0d4266b       snapshot-controller-58dbcc7b99-gtr2s
	a17aa0fc61c0d       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   a52f05c43b7d0       yakd-dashboard-9947fc6bf-tq9fn
	7c6497418eb3e       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Running             tiller                                   0                   7e5732dba35bb       tiller-deploy-7b677967b9-gqg8w
	5aa7822dc3f5f       gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15                               2 minutes ago        Running             cloud-spanner-emulator                   0                   b14ed963519d4       cloud-spanner-emulator-6548d5df46-wfz9s
	9d56952f11113       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   4cfb0f712ef8d       kube-ingress-dns-minikube
	8efdab20c3b88       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   8cd396f4f6b90       storage-provisioner
	65d2edbcde3a1       ead0a4a53df89                                                                                                                                3 minutes ago        Running             coredns                                  0                   71ec971c83da6       coredns-5dd5756b68-68dzl
	c6dd1059512fb       83f6cc407eed8                                                                                                                                3 minutes ago        Running             kube-proxy                               0                   69b37d021be7d       kube-proxy-kddsj
	2b2eb495c731a       73deb9a3f7025                                                                                                                                4 minutes ago        Running             etcd                                     0                   81831df955853       etcd-addons-953400
	9363433f1cd6a       e3db313c6dbc0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   13889bc645d1a       kube-scheduler-addons-953400
	13d66e6e04852       7fe0e6f37db33                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   50ac109cd0681       kube-apiserver-addons-953400
	88b721b730415       d058aa5ab969c                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   dad8e98f1e98a       kube-controller-manager-addons-953400
	
	
	==> controller_ingress [063d92836229] <==
	W0314 17:48:11.776513       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0314 17:48:11.776757       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0314 17:48:11.786348       7 main.go:249] "Running in Kubernetes cluster" major="1" minor="28" git="v1.28.4" state="clean" commit="bae2c62678db2b5053817bc97181fcc2e8388103" platform="linux/amd64"
	I0314 17:48:12.012029       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0314 17:48:12.044501       7 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0314 17:48:12.061717       7 nginx.go:265] "Starting NGINX Ingress controller"
	I0314 17:48:12.081755       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"854ca7a9-d6eb-4d88-ae00-a6272c352371", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0314 17:48:12.089144       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"17199fa7-7641-4661-b0a3-817f0826b205", APIVersion:"v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0314 17:48:12.089414       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"f7e9c821-b15d-4639-b9a7-b223af3857cf", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0314 17:48:13.264077       7 nginx.go:308] "Starting NGINX process"
	I0314 17:48:13.264188       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0314 17:48:13.267913       7 nginx.go:328] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0314 17:48:13.268127       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0314 17:48:13.281070       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0314 17:48:13.281690       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-76dc478dd8-mrlz9"
	I0314 17:48:13.295936       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-76dc478dd8-mrlz9" node="addons-953400"
	I0314 17:48:13.345608       7 controller.go:210] "Backend successfully reloaded"
	I0314 17:48:13.345997       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0314 17:48:13.346731       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-76dc478dd8-mrlz9", UID:"b9dc977d-afca-4d27-86da-2ada921a9a49", APIVersion:"v1", ResourceVersion:"1260", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         71f78d49f0a496c31d4c19f095469f3f23900f8a
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [65d2edbcde3a] <==
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	[INFO] Reloading complete
	[INFO] 10.244.0.8:34037 - 5985 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000893467s
	[INFO] 10.244.0.8:34037 - 61285 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000096207s
	[INFO] 10.244.0.8:43341 - 18238 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079306s
	[INFO] 10.244.0.8:43341 - 50747 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076506s
	[INFO] 10.244.0.8:52681 - 1771 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106808s
	[INFO] 10.244.0.8:52681 - 44014 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000138111s
	[INFO] 10.244.0.8:52583 - 65426 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000179113s
	[INFO] 10.244.0.8:52583 - 41364 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000100908s
	[INFO] 10.244.0.8:49042 - 58093 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155011s
	[INFO] 10.244.0.8:60989 - 44121 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000060605s
	[INFO] 10.244.0.8:33219 - 54756 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091607s
	[INFO] 10.244.0.8:44001 - 14351 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000044903s
	[INFO] 10.244.0.22:60679 - 9777 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000290624s
	[INFO] 10.244.0.22:60000 - 30839 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000149412s
	[INFO] 10.244.0.22:48599 - 47936 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012141s
	[INFO] 10.244.0.22:50054 - 19933 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000063905s
	[INFO] 10.244.0.22:53335 - 42240 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012041s
	[INFO] 10.244.0.22:45393 - 9073 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063005s
	[INFO] 10.244.0.22:44286 - 46714 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.001734339s
	[INFO] 10.244.0.22:56454 - 55338 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002152073s
	[INFO] 10.244.0.25:36855 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000406032s
	[INFO] 10.244.0.25:48625 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00038133s
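
The NXDOMAIN/NOERROR pattern above is standard Kubernetes DNS search-path expansion rather than a registry fault: assuming the conventional pod resolv.conf (options ndots:5 plus the three cluster search suffixes; the file itself is not captured in this log), a name with fewer than five dots is tried against each suffix before being queried verbatim. A minimal Go sketch of that expansion order:

```go
package main

import "fmt"

// Reproduces the lookup order in the coredns log above, assuming the
// conventional pod resolv.conf: options ndots:5 and the search suffixes
// kube-system.svc.cluster.local, svc.cluster.local, cluster.local.
func main() {
	search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	name := "registry.kube-system.svc.cluster.local" // 4 dots, below ndots:5

	for _, suffix := range search {
		fmt.Println(name + "." + suffix) // each expansion returned NXDOMAIN above
	}
	fmt.Println(name) // the verbatim name finally returned NOERROR
}
```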
	
	
	==> describe nodes <==
	Name:               addons-953400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-953400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=addons-953400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T17_44_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-953400
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-953400"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 17:44:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-953400
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 17:49:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 17:49:05 +0000   Thu, 14 Mar 2024 17:44:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 17:49:05 +0000   Thu, 14 Mar 2024 17:44:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 17:49:05 +0000   Thu, 14 Mar 2024 17:44:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 17:49:05 +0000   Thu, 14 Mar 2024 17:45:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.87.211
	  Hostname:    addons-953400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912868Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912868Ki
	  pods:               110
	System Info:
	  Machine ID:                 c314617201b045faa63b200855e60496
	  System UUID:                95e05610-9458-034d-aedd-4c11f2eeaf9a
	  Boot ID:                    7fa0eb36-54f5-4c7e-9a71-1a9d74ada938
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-wfz9s      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  gcp-auth                    gcp-auth-7d69788767-cvs76                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  ingress-nginx               ingress-nginx-controller-76dc478dd8-mrlz9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m29s
	  kube-system                 coredns-5dd5756b68-68dzl                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m58s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 csi-hostpathplugin-v4htt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 etcd-addons-953400                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m11s
	  kube-system                 kube-apiserver-addons-953400                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-addons-953400        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-proxy-kddsj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-scheduler-addons-953400                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 snapshot-controller-58dbcc7b99-gtr2s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 snapshot-controller-58dbcc7b99-vfqkc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 tiller-deploy-7b677967b9-gqg8w               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  local-path-storage          local-path-provisioner-78b46b4d5c-zzbvd      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-tq9fn               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  Starting                 4m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m20s (x8 over 4m20s)  kubelet          Node addons-953400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s (x8 over 4m20s)  kubelet          Node addons-953400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s (x7 over 4m20s)  kubelet          Node addons-953400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m11s                  kubelet          Node addons-953400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s                  kubelet          Node addons-953400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s                  kubelet          Node addons-953400 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m7s                   kubelet          Node addons-953400 status is now: NodeReady
	  Normal  RegisteredNode           3m59s                  node-controller  Node addons-953400 event: Registered Node addons-953400 in Controller
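
A note on the resource tables above: kubectl prints those cells as plain percentages, for example `0 (0%)`, but when such a line is forwarded through a printf-style call with no arguments, Go's fmt package parses the stray `%)` as a missing-argument verb and emits `0 (0%!)(MISSING)`, which is why percent signs in captured minikube logs often appear mangled. A minimal standalone reproduction (a hypothetical program, not minikube's own code):

```go
package main

import "fmt"

// A string that already contains '%' must not be used as a format string.
func main() {
	cell := "cpu 850m (42%)"

	fmt.Printf(cell + "\n")  // wrong: prints "cpu 850m (42%!)(MISSING)"
	fmt.Printf("%s\n", cell) // right: prints "cpu 850m (42%)"
}
```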
	
	
	==> dmesg <==
	[  +0.126746] kauditd_printk_skb: 62 callbacks suppressed
	[Mar14 17:45] systemd-fstab-generator[3484]: Ignoring "noauto" option for root device
	[  +0.462791] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.106674] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.618138] kauditd_printk_skb: 36 callbacks suppressed
	[ +11.224359] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.253243] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.146504] kauditd_printk_skb: 103 callbacks suppressed
	[ +11.714971] kauditd_printk_skb: 49 callbacks suppressed
	[Mar14 17:46] kauditd_printk_skb: 4 callbacks suppressed
	[ +24.639736] kauditd_printk_skb: 4 callbacks suppressed
	[Mar14 17:47] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.006266] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.338773] kauditd_printk_skb: 11 callbacks suppressed
	[ +14.078174] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.731516] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.083691] kauditd_printk_skb: 29 callbacks suppressed
	[Mar14 17:48] kauditd_printk_skb: 8 callbacks suppressed
	[ +14.284088] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.440463] kauditd_printk_skb: 42 callbacks suppressed
	[  +6.272279] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.716048] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.276335] kauditd_printk_skb: 28 callbacks suppressed
	[Mar14 17:49] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.836548] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [2b2eb495c731] <==
	{"level":"warn","ts":"2024-03-14T17:48:10.878727Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T17:48:10.285381Z","time spent":"593.322035ms","remote":"127.0.0.1:46100","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":31,"request content":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true "}
	{"level":"warn","ts":"2024-03-14T17:48:11.238593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"348.579383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4153"}
	{"level":"info","ts":"2024-03-14T17:48:11.238712Z","caller":"traceutil/trace.go:171","msg":"trace[2085361120] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1254; }","duration":"348.714194ms","start":"2024-03-14T17:48:10.889986Z","end":"2024-03-14T17:48:11.2387Z","steps":["trace[2085361120] 'range keys from in-memory index tree'  (duration: 348.500877ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T17:48:11.23875Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T17:48:10.889973Z","time spent":"348.766898ms","remote":"127.0.0.1:45972","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":4177,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-03-14T17:48:11.239976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.478067ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9699221144721721348 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1251 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T17:48:11.240092Z","caller":"traceutil/trace.go:171","msg":"trace[1930859457] linearizableReadLoop","detail":"{readStateIndex:1314; appliedIndex:1313; }","duration":"240.755227ms","start":"2024-03-14T17:48:10.999327Z","end":"2024-03-14T17:48:11.240083Z","steps":["trace[1930859457] 'read index received'  (duration: 135.079744ms)","trace[1930859457] 'applied index is now lower than readState.Index'  (duration: 105.673983ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T17:48:11.240399Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.087854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13879"}
	{"level":"info","ts":"2024-03-14T17:48:11.240571Z","caller":"traceutil/trace.go:171","msg":"trace[422359360] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1255; }","duration":"241.260568ms","start":"2024-03-14T17:48:10.9993Z","end":"2024-03-14T17:48:11.240561Z","steps":["trace[422359360] 'agreement among raft nodes before linearized reading'  (duration: 240.927941ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T17:48:11.241004Z","caller":"traceutil/trace.go:171","msg":"trace[1763353815] transaction","detail":"{read_only:false; response_revision:1255; number_of_response:1; }","duration":"350.549342ms","start":"2024-03-14T17:48:10.890442Z","end":"2024-03-14T17:48:11.240991Z","steps":["trace[1763353815] 'process raft request'  (duration: 244.008589ms)","trace[1763353815] 'compare'  (duration: 104.046753ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T17:48:11.241332Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T17:48:10.89043Z","time spent":"350.766159ms","remote":"127.0.0.1:45946","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1251 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-14T17:48:15.432076Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.116904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-03-14T17:48:15.432647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.677197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4153"}
	{"level":"info","ts":"2024-03-14T17:48:15.432695Z","caller":"traceutil/trace.go:171","msg":"trace[1505630651] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1273; }","duration":"180.733901ms","start":"2024-03-14T17:48:15.251955Z","end":"2024-03-14T17:48:15.432688Z","steps":["trace[1505630651] 'range keys from in-memory index tree'  (duration: 180.577989ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T17:48:15.432511Z","caller":"traceutil/trace.go:171","msg":"trace[1160457725] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1273; }","duration":"314.55804ms","start":"2024-03-14T17:48:15.117934Z","end":"2024-03-14T17:48:15.432492Z","steps":["trace[1160457725] 'range keys from in-memory index tree'  (duration: 313.93969ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T17:48:15.433497Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T17:48:15.117916Z","time spent":"315.56802ms","remote":"127.0.0.1:45778","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-03-14T17:48:15.43408Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.429094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-03-14T17:48:15.434169Z","caller":"traceutil/trace.go:171","msg":"trace[28509976] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1273; }","duration":"169.640211ms","start":"2024-03-14T17:48:15.264517Z","end":"2024-03-14T17:48:15.434157Z","steps":["trace[28509976] 'range keys from in-memory index tree'  (duration: 169.348188ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T17:48:41.838024Z","caller":"traceutil/trace.go:171","msg":"trace[1862841152] transaction","detail":"{read_only:false; response_revision:1438; number_of_response:1; }","duration":"119.150149ms","start":"2024-03-14T17:48:41.718857Z","end":"2024-03-14T17:48:41.838007Z","steps":["trace[1862841152] 'process raft request'  (duration: 118.696013ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T17:48:44.972497Z","caller":"traceutil/trace.go:171","msg":"trace[728062114] transaction","detail":"{read_only:false; response_revision:1444; number_of_response:1; }","duration":"431.341556ms","start":"2024-03-14T17:48:44.54114Z","end":"2024-03-14T17:48:44.972481Z","steps":["trace[728062114] 'process raft request'  (duration: 430.960126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T17:48:44.972776Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T17:48:44.54112Z","time spent":"431.51207ms","remote":"127.0.0.1:46068","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-953400\" mod_revision:1379 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-953400\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-953400\" > >"}
	{"level":"warn","ts":"2024-03-14T17:48:45.125457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.837521ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-9b8b44a0-12f8-43a2-8d75-342adde9e68c\" ","response":"range_response_count:1 size:4234"}
	{"level":"info","ts":"2024-03-14T17:48:45.125511Z","caller":"traceutil/trace.go:171","msg":"trace[71165377] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-delete-pvc-9b8b44a0-12f8-43a2-8d75-342adde9e68c; range_end:; response_count:1; response_revision:1444; }","duration":"109.909226ms","start":"2024-03-14T17:48:45.01559Z","end":"2024-03-14T17:48:45.125499Z","steps":["trace[71165377] 'range keys from in-memory index tree'  (duration: 109.743614ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T17:48:48.63449Z","caller":"traceutil/trace.go:171","msg":"trace[387274803] transaction","detail":"{read_only:false; response_revision:1456; number_of_response:1; }","duration":"122.455733ms","start":"2024-03-14T17:48:48.512019Z","end":"2024-03-14T17:48:48.634475Z","steps":["trace[387274803] 'process raft request'  (duration: 122.345625ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T17:48:48.987221Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.232898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-14T17:48:48.987307Z","caller":"traceutil/trace.go:171","msg":"trace[566135365] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1456; }","duration":"198.342306ms","start":"2024-03-14T17:48:48.788952Z","end":"2024-03-14T17:48:48.987295Z","steps":["trace[566135365] 'count revisions from in-memory index tree'  (duration: 198.150493ms)"],"step_count":1}
	
	
	==> gcp-auth [c84ce1fc6b76] <==
	2024/03/14 17:48:16 GCP Auth Webhook started!
	2024/03/14 17:48:18 Ready to marshal response ...
	2024/03/14 17:48:18 Ready to write response ...
	2024/03/14 17:48:18 Ready to marshal response ...
	2024/03/14 17:48:18 Ready to write response ...
	2024/03/14 17:48:28 Ready to marshal response ...
	2024/03/14 17:48:28 Ready to write response ...
	2024/03/14 17:48:38 Ready to marshal response ...
	2024/03/14 17:48:38 Ready to write response ...
	2024/03/14 17:48:39 Ready to marshal response ...
	2024/03/14 17:48:39 Ready to write response ...
	2024/03/14 17:49:09 Ready to marshal response ...
	2024/03/14 17:49:09 Ready to write response ...
	
	
	==> kernel <==
	 17:49:09 up 6 min,  0 users,  load average: 2.89, 2.27, 1.03
	Linux addons-953400 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [13d66e6e0485] <==
	I0314 17:46:31.953010       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0314 17:46:31.955716       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.204.21:443/apis/metrics.k8s.io/v1beta1: Get "https://10.102.204.21:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.102.204.21:443: connect: connection refused
	I0314 17:46:32.055681       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 17:46:54.195059       1 trace.go:236] Trace[606219424]: "List" accept:application/json, */*,audit-id:dc88ec1c-243d-4230-b4c6-257a9ee7fecd,client:172.17.80.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (14-Mar-2024 17:46:53.341) (total time: 852ms):
	Trace[606219424]: ["List(recursive=true) etcd3" audit-id:dc88ec1c-243d-4230-b4c6-257a9ee7fecd,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 852ms (17:46:53.341)]
	Trace[606219424]: [852.724388ms] [852.724388ms] END
	I0314 17:46:54.196040       1 trace.go:236] Trace[307232485]: "List" accept:application/json, */*,audit-id:c72b7d40-3146-453d-8104-9fe290426566,client:172.17.80.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (14-Mar-2024 17:46:53.483) (total time: 712ms):
	Trace[307232485]: ["List(recursive=true) etcd3" audit-id:c72b7d40-3146-453d-8104-9fe290426566,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 712ms (17:46:53.483)]
	Trace[307232485]: [712.319585ms] [712.319585ms] END
	I0314 17:46:54.297371       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 17:47:54.298043       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 17:48:10.884780       1 trace.go:236] Trace[1611439897]: "Update" accept:application/json, */*,audit-id:f877ab9d-7929-40ba-8ef7-1c132a86e70b,client:10.244.0.16,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/external-health-monitor-leader-hostpath-csi-k8s-io,user-agent:csi-external-health-monitor-controller/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (14-Mar-2024 17:48:10.143) (total time: 741ms):
	Trace[1611439897]: ["GuaranteedUpdate etcd3" audit-id:f877ab9d-7929-40ba-8ef7-1c132a86e70b,key:/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io,type:*coordination.Lease,resource:leases.coordination.k8s.io 740ms (17:48:10.143)
	Trace[1611439897]:  ---"Txn call completed" 740ms (17:48:10.884)]
	Trace[1611439897]: [741.178105ms] [741.178105ms] END
	I0314 17:48:10.885852       1 trace.go:236] Trace[1719123176]: "Get" accept:application/json, */*,audit-id:3dc708f7-b533-4fd6-80c3-b94a11f46563,client:172.17.87.211,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (14-Mar-2024 17:48:10.157) (total time: 728ms):
	Trace[1719123176]: ---"About to write a response" 728ms (17:48:10.885)
	Trace[1719123176]: [728.329274ms] [728.329274ms] END
	I0314 17:48:10.886536       1 trace.go:236] Trace[1244447863]: "List" accept:application/json, */*,audit-id:886fb3f5-1ac2-4ecd-a074-4abc0e51f682,client:172.17.80.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (14-Mar-2024 17:48:10.244) (total time: 642ms):
	Trace[1244447863]: ["List(recursive=true) etcd3" audit-id:886fb3f5-1ac2-4ecd-a074-4abc0e51f682,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 641ms (17:48:10.244)]
	Trace[1244447863]: [642.077149ms] [642.077149ms] END
	I0314 17:48:59.112612       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0314 17:49:06.323434       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0314 17:49:06.348871       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0314 17:49:07.397818       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [88b721b73041] <==
	I0314 17:47:42.806099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="12.105655ms"
	I0314 17:47:42.807086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="52.304µs"
	I0314 17:48:04.030560       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0314 17:48:04.039119       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0314 17:48:04.123925       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0314 17:48:04.140567       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0314 17:48:11.843386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="94.007µs"
	I0314 17:48:17.333994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="27.858134ms"
	I0314 17:48:17.335467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="87.507µs"
	I0314 17:48:18.407836       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0314 17:48:18.440526       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0314 17:48:18.441703       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0314 17:48:18.745532       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0314 17:48:18.746006       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0314 17:48:25.321145       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0314 17:48:27.575902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="37.276088ms"
	I0314 17:48:27.576270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="104.609µs"
	I0314 17:48:37.667017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-69cf46c98" duration="8.001µs"
	I0314 17:48:37.970915       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0314 17:48:49.236131       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="12.401µs"
	I0314 17:49:02.642492       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	E0314 17:49:07.400654       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	I0314 17:49:07.830101       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0314 17:49:08.764166       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 17:49:08.764209       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [c6dd1059512f] <==
	I0314 17:45:17.882986       1 server_others.go:69] "Using iptables proxy"
	I0314 17:45:18.046963       1 node.go:141] Successfully retrieved node IP: 172.17.87.211
	I0314 17:45:18.290481       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 17:45:18.290529       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 17:45:18.296974       1 server_others.go:152] "Using iptables Proxier"
	I0314 17:45:18.297045       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 17:45:18.297441       1 server.go:846] "Version info" version="v1.28.4"
	I0314 17:45:18.297460       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 17:45:18.301949       1 config.go:188] "Starting service config controller"
	I0314 17:45:18.301988       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 17:45:18.302026       1 config.go:97] "Starting endpoint slice config controller"
	I0314 17:45:18.302033       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 17:45:18.302867       1 config.go:315] "Starting node config controller"
	I0314 17:45:18.302886       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 17:45:18.410594       1 shared_informer.go:318] Caches are synced for service config
	I0314 17:45:18.410742       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 17:45:18.412477       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [9363433f1cd6] <==
	W0314 17:44:55.331832       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 17:44:55.331857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 17:44:55.344140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 17:44:55.344475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 17:44:55.485491       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 17:44:55.485571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 17:44:55.594884       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 17:44:55.595003       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 17:44:55.596977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 17:44:55.597332       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 17:44:55.633087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 17:44:55.633118       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 17:44:55.679395       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 17:44:55.679427       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 17:44:55.686100       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0314 17:44:55.686404       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 17:44:55.889034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 17:44:55.889101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 17:44:55.892885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 17:44:55.892909       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 17:44:55.945888       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 17:44:55.946155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0314 17:44:55.976858       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 17:44:55.977042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 17:44:58.247982       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 17:49:06 addons-953400 kubelet[2779]: I0314 17:49:06.752025    2779 reconciler_common.go:300] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/2dbfe7be-9274-4c45-ab1b-7cfd9a866ec7-modules\") on node \"addons-953400\" DevicePath \"\""
	Mar 14 17:49:06 addons-953400 kubelet[2779]: I0314 17:49:06.752037    2779 reconciler_common.go:300] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/2dbfe7be-9274-4c45-ab1b-7cfd9a866ec7-debugfs\") on node \"addons-953400\" DevicePath \"\""
	Mar 14 17:49:06 addons-953400 kubelet[2779]: I0314 17:49:06.752047    2779 reconciler_common.go:300] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2dbfe7be-9274-4c45-ab1b-7cfd9a866ec7-run\") on node \"addons-953400\" DevicePath \"\""
	Mar 14 17:49:07 addons-953400 kubelet[2779]: I0314 17:49:07.150681    2779 scope.go:117] "RemoveContainer" containerID="2466cad0ae36672df0c343c6f91c4b7a4f516abb159c9cad809683491a7dc9e9"
	Mar 14 17:49:08 addons-953400 kubelet[2779]: I0314 17:49:08.531009    2779 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2dbfe7be-9274-4c45-ab1b-7cfd9a866ec7" path="/var/lib/kubelet/pods/2dbfe7be-9274-4c45-ab1b-7cfd9a866ec7/volumes"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: I0314 17:49:09.097316    2779 topology_manager.go:215] "Topology Admit Handler" podUID="07647055-5add-40e7-9984-d0f41bca5829" podNamespace="default" podName="task-pv-pod-restore"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: E0314 17:49:09.104377    2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1660d21-0990-49e4-8dbd-38ca8e48925c" containerName="helper-pod"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: E0314 17:49:09.104408    2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3d1b2c5-1dbe-465c-a3cb-5e2c60dfc6aa" containerName="registry"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: E0314 17:49:09.104419    2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="926967d3-08e6-4eac-85e1-1799cfddbc1f" containerName="task-pv-container"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: E0314 17:49:09.104462    2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb78e5bd-1c28-45cc-b020-51f1a27eeb0a" containerName="registry-proxy"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: E0314 17:49:09.104474    2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2dbfe7be-9274-4c45-ab1b-7cfd9a866ec7" containerName="gadget"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: E0314 17:49:09.104482    2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2dbfe7be-9274-4c45-ab1b-7cfd9a866ec7" containerName="gadget"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: E0314 17:49:09.104491    2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2dbfe7be-9274-4c45-ab1b-7cfd9a866ec7" containerName="gadget"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: E0314 17:49:09.105034    2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93445059-341c-47bd-aac9-8a1887ea3d53" containerName="nvidia-device-plugin-ctr"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: I0314 17:49:09.105369    2779 memory_manager.go:346] "RemoveStaleState removing state" podUID="2dbfe7be-9274-4c45-ab1b-7cfd9a866ec7" containerName="gadget"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: I0314 17:49:09.105391    2779 memory_manager.go:346] "RemoveStaleState removing state" podUID="926967d3-08e6-4eac-85e1-1799cfddbc1f" containerName="task-pv-container"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: I0314 17:49:09.105401    2779 memory_manager.go:346] "RemoveStaleState removing state" podUID="b1660d21-0990-49e4-8dbd-38ca8e48925c" containerName="helper-pod"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: I0314 17:49:09.105410    2779 memory_manager.go:346] "RemoveStaleState removing state" podUID="93445059-341c-47bd-aac9-8a1887ea3d53" containerName="nvidia-device-plugin-ctr"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: I0314 17:49:09.105419    2779 memory_manager.go:346] "RemoveStaleState removing state" podUID="a3d1b2c5-1dbe-465c-a3cb-5e2c60dfc6aa" containerName="registry"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: I0314 17:49:09.105428    2779 memory_manager.go:346] "RemoveStaleState removing state" podUID="2dbfe7be-9274-4c45-ab1b-7cfd9a866ec7" containerName="gadget"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: I0314 17:49:09.105466    2779 memory_manager.go:346] "RemoveStaleState removing state" podUID="cb78e5bd-1c28-45cc-b020-51f1a27eeb0a" containerName="registry-proxy"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: I0314 17:49:09.182794    2779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/07647055-5add-40e7-9984-d0f41bca5829-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"07647055-5add-40e7-9984-d0f41bca5829\") " pod="default/task-pv-pod-restore"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: I0314 17:49:09.182859    2779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fb4d52f4-3660-4c9b-a16d-c7f7f13072a9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^280e1824-e22b-11ee-a91f-1e38b9ddae0c\") pod \"task-pv-pod-restore\" (UID: \"07647055-5add-40e7-9984-d0f41bca5829\") " pod="default/task-pv-pod-restore"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: I0314 17:49:09.182898    2779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q49nw\" (UniqueName: \"kubernetes.io/projected/07647055-5add-40e7-9984-d0f41bca5829-kube-api-access-q49nw\") pod \"task-pv-pod-restore\" (UID: \"07647055-5add-40e7-9984-d0f41bca5829\") " pod="default/task-pv-pod-restore"
	Mar 14 17:49:09 addons-953400 kubelet[2779]: I0314 17:49:09.321880    2779 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-fb4d52f4-3660-4c9b-a16d-c7f7f13072a9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^280e1824-e22b-11ee-a91f-1e38b9ddae0c\") pod \"task-pv-pod-restore\" (UID: \"07647055-5add-40e7-9984-d0f41bca5829\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/861b14851a7d4b826e66a000025d882a9c5e81e20c188ae9197614bb8fad6dde/globalmount\"" pod="default/task-pv-pod-restore"
	
	
	==> storage-provisioner [8efdab20c3b8] <==
	I0314 17:45:43.303008       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 17:45:43.424436       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 17:45:43.424486       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 17:45:43.696530       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 17:45:43.704147       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-953400_ac62ac77-b12f-427d-8a84-1fd56415338c!
	I0314 17:45:43.705339       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cb529b18-369e-4202-ada6-7568fc3d9623", APIVersion:"v1", ResourceVersion:"830", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-953400_ac62ac77-b12f-427d-8a84-1fd56415338c became leader
	I0314 17:45:43.805351       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-953400_ac62ac77-b12f-427d-8a84-1fd56415338c!

-- /stdout --
** stderr ** 
	W0314 17:49:01.172435   11888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-953400 -n addons-953400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-953400 -n addons-953400: (11.8475851s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-953400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-rhv74 ingress-nginx-admission-patch-r5t4p
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-953400 describe pod ingress-nginx-admission-create-rhv74 ingress-nginx-admission-patch-r5t4p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-953400 describe pod ingress-nginx-admission-create-rhv74 ingress-nginx-admission-patch-r5t4p: exit status 1 (159.2321ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rhv74" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-r5t4p" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-953400 describe pod ingress-nginx-admission-create-rhv74 ingress-nginx-admission-patch-r5t4p: exit status 1
--- FAIL: TestAddons/parallel/Registry (64.70s)

TestErrorSpam/setup (187.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-536000 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 --driver=hyperv
E0314 17:53:17.821317   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:17.836535   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:17.852076   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:17.883762   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:17.931585   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:18.025748   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:18.199958   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:18.529720   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:19.182863   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:20.471750   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:23.039109   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:28.168724   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:38.414593   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:53:58.908448   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 17:54:39.880180   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-536000 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 --driver=hyperv: (3m7.7254598s)
error_spam_test.go:96: unexpected stderr: "W0314 17:52:33.242249    6176 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-536000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
- KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
- MINIKUBE_LOCATION=18384
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-536000" primary control-plane node in "nospam-536000" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-536000" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0314 17:52:33.242249    6176 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
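Note: the 64-character hex directory in this warning matches the Docker CLI's context store layout, where each context's metadata lives under a directory named with the SHA-256 digest of the context name; 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f is the digest of the string "default". A minimal Go sketch of that derivation (an illustration, not minikube or Docker CLI source):

	package main

	import (
		"crypto/sha256"
		"fmt"
		"path/filepath"
	)

	func main() {
		// The Docker CLI keys each context's metadata directory by the
		// SHA-256 digest of the context name ("default" here).
		digest := sha256.Sum256([]byte("default"))
		// Prints: .docker/contexts/meta/37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f/meta.json
		fmt.Println(filepath.Join(".docker", "contexts", "meta",
			fmt.Sprintf("%x", digest), "meta.json"))
	}

So the warning only means no meta.json exists for the "default" context on this Jenkins host; the test fails because it asserts an empty stderr, not because the cluster is unhealthy.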
--- FAIL: TestErrorSpam/setup (187.73s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (31.24s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-866600 -n functional-866600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-866600 -n functional-866600: (11.114613s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 logs -n 25: (7.9612122s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-536000 --log_dir                                     | nospam-536000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:56 UTC | 14 Mar 24 17:56 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-536000 --log_dir                                     | nospam-536000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:56 UTC | 14 Mar 24 17:56 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-536000 --log_dir                                     | nospam-536000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:57 UTC | 14 Mar 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-536000 --log_dir                                     | nospam-536000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:57 UTC | 14 Mar 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-536000 --log_dir                                     | nospam-536000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:57 UTC | 14 Mar 24 17:57 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-536000 --log_dir                                     | nospam-536000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:57 UTC | 14 Mar 24 17:58 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-536000 --log_dir                                     | nospam-536000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:58 UTC | 14 Mar 24 17:58 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-536000                                            | nospam-536000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:58 UTC | 14 Mar 24 17:58 UTC |
	| start   | -p functional-866600                                        | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:58 UTC | 14 Mar 24 18:02 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-866600                                        | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:02 UTC | 14 Mar 24 18:04 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-866600 cache add                                 | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-866600 cache add                                 | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-866600 cache add                                 | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-866600 cache add                                 | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	|         | minikube-local-cache-test:functional-866600                 |                   |                   |         |                     |                     |
	| cache   | functional-866600 cache delete                              | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	|         | minikube-local-cache-test:functional-866600                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	| ssh     | functional-866600 ssh sudo                                  | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-866600                                           | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:05 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-866600 ssh                                       | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:05 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-866600 cache reload                              | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:05 UTC | 14 Mar 24 18:05 UTC |
	| ssh     | functional-866600 ssh                                       | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:05 UTC | 14 Mar 24 18:05 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:05 UTC | 14 Mar 24 18:05 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:05 UTC | 14 Mar 24 18:05 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-866600 kubectl --                                | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:05 UTC | 14 Mar 24 18:05 UTC |
	|         | --context functional-866600                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:02:20
	Running on machine: minikube7
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:02:20.072046    5716 out.go:291] Setting OutFile to fd 900 ...
	I0314 18:02:20.072259    5716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:02:20.072259    5716 out.go:304] Setting ErrFile to fd 696...
	I0314 18:02:20.072259    5716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:02:20.091619    5716 out.go:298] Setting JSON to false
	I0314 18:02:20.094742    5716 start.go:129] hostinfo: {"hostname":"minikube7","uptime":61144,"bootTime":1710378195,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 18:02:20.094742    5716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 18:02:20.098908    5716 out.go:177] * [functional-866600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 18:02:20.100812    5716 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:02:20.100812    5716 notify.go:220] Checking for updates...
	I0314 18:02:20.104105    5716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:02:20.106539    5716 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 18:02:20.110224    5716 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:02:20.112979    5716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:02:20.118207    5716 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:02:20.118778    5716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:02:25.036630    5716 out.go:177] * Using the hyperv driver based on existing profile
	I0314 18:02:25.039698    5716 start.go:297] selected driver: hyperv
	I0314 18:02:25.039698    5716 start.go:901] validating driver "hyperv" against &{Name:functional-866600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.28.4 ClusterName:functional-866600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.91.78 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:02:25.039861    5716 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:02:25.085035    5716 cni.go:84] Creating CNI manager for ""
	I0314 18:02:25.085035    5716 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 18:02:25.085035    5716 start.go:340] cluster config:
	{Name:functional-866600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-866600 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.91.78 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:02:25.086037    5716 iso.go:125] acquiring lock: {Name:mk1b3e73402180391a20a865a9454da445c269fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:02:25.089520    5716 out.go:177] * Starting "functional-866600" primary control-plane node in "functional-866600" cluster
	I0314 18:02:25.091567    5716 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:02:25.091567    5716 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0314 18:02:25.091567    5716 cache.go:56] Caching tarball of preloaded images
	I0314 18:02:25.092666    5716 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 18:02:25.092879    5716 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 18:02:25.093062    5716 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\config.json ...
	I0314 18:02:25.094243    5716 start.go:360] acquireMachinesLock for functional-866600: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:02:25.094922    5716 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-866600"
	I0314 18:02:25.094982    5716 start.go:96] Skipping create...Using existing machine configuration
	I0314 18:02:25.095094    5716 fix.go:54] fixHost starting: 
	I0314 18:02:25.095134    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:02:27.658001    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:02:27.658485    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:27.658485    5716 fix.go:112] recreateIfNeeded on functional-866600: state=Running err=<nil>
	W0314 18:02:27.658485    5716 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 18:02:27.664209    5716 out.go:177] * Updating the running hyperv "functional-866600" VM ...
	I0314 18:02:27.669346    5716 machine.go:94] provisionDockerMachine start ...
	I0314 18:02:27.669346    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:02:29.696918    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:02:29.696918    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:29.696996    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:02:32.054909    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:02:32.054909    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:32.059009    5716 main.go:141] libmachine: Using SSH client type: native
	I0314 18:02:32.059633    5716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.78 22 <nil> <nil>}
	I0314 18:02:32.059633    5716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:02:32.191709    5716 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-866600
	
	I0314 18:02:32.191709    5716 buildroot.go:166] provisioning hostname "functional-866600"
	I0314 18:02:32.191709    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:02:34.132300    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:02:34.132300    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:34.132489    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:02:36.531205    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:02:36.531205    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:36.534768    5716 main.go:141] libmachine: Using SSH client type: native
	I0314 18:02:36.535372    5716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.78 22 <nil> <nil>}
	I0314 18:02:36.535372    5716 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-866600 && echo "functional-866600" | sudo tee /etc/hostname
	I0314 18:02:36.695945    5716 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-866600
	
	I0314 18:02:36.696069    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:02:38.638009    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:02:38.638009    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:38.638086    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:02:41.019226    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:02:41.019465    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:41.023726    5716 main.go:141] libmachine: Using SSH client type: native
	I0314 18:02:41.023726    5716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.78 22 <nil> <nil>}
	I0314 18:02:41.023726    5716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-866600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-866600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-866600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:02:41.160118    5716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:02:41.160118    5716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 18:02:41.160118    5716 buildroot.go:174] setting up certificates
	I0314 18:02:41.160118    5716 provision.go:84] configureAuth start
	I0314 18:02:41.160118    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:02:43.109668    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:02:43.109668    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:43.110111    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:02:45.523022    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:02:45.523022    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:45.523099    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:02:47.490603    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:02:47.491427    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:47.491427    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:02:49.852871    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:02:49.852871    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:49.852871    5716 provision.go:143] copyHostCerts
	I0314 18:02:49.852871    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 18:02:49.852871    5716 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 18:02:49.852871    5716 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 18:02:49.853451    5716 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 18:02:49.854171    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 18:02:49.854171    5716 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 18:02:49.854171    5716 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 18:02:49.854787    5716 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 18:02:49.855521    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 18:02:49.855521    5716 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 18:02:49.855521    5716 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 18:02:49.856094    5716 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 18:02:49.856829    5716 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-866600 san=[127.0.0.1 172.17.91.78 functional-866600 localhost minikube]
	I0314 18:02:50.072542    5716 provision.go:177] copyRemoteCerts
	I0314 18:02:50.081469    5716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:02:50.082469    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:02:52.067581    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:02:52.067581    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:52.067581    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:02:54.454980    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:02:54.454980    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:54.455371    5716 sshutil.go:53] new ssh client: &{IP:172.17.91.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-866600\id_rsa Username:docker}
	I0314 18:02:54.566432    5716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4836302s)
	I0314 18:02:54.566889    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 18:02:54.567389    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:02:54.617745    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 18:02:54.618744    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 18:02:54.662888    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 18:02:54.663915    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 18:02:54.708752    5716 provision.go:87] duration metric: took 13.5476272s to configureAuth
	I0314 18:02:54.708844    5716 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:02:54.709574    5716 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:02:54.709770    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:02:56.688213    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:02:56.688213    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:56.688213    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:02:59.095463    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:02:59.095463    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:02:59.099332    5716 main.go:141] libmachine: Using SSH client type: native
	I0314 18:02:59.099796    5716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.78 22 <nil> <nil>}
	I0314 18:02:59.099796    5716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 18:02:59.232622    5716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 18:02:59.232622    5716 buildroot.go:70] root file system type: tmpfs
	I0314 18:02:59.232708    5716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 18:02:59.232802    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:03:01.222567    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:03:01.222567    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:01.223350    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:03:03.683501    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:03:03.684050    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:03.688069    5716 main.go:141] libmachine: Using SSH client type: native
	I0314 18:03:03.688221    5716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.78 22 <nil> <nil>}
	I0314 18:03:03.688221    5716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 18:03:03.850814    5716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
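Note: the %!s(MISSING) and %!N(MISSING) tokens in the echoed command above (and in the "date +%!s(MISSING).%!N(MISSING)" command further down) are Go fmt annotations for format verbs that had no matching argument, not part of what actually ran: the real commands contained literal %s and %N, but the command text appears to have passed through a printf-style logging call, so fmt flagged the verbs. The unit-file output echoed back just above came through intact; only the logged command string is mangled. A minimal sketch of the fmt behavior (illustration only):

	package main

	import "fmt"

	func main() {
		// A shell command that legitimately contains % verbs:
		cmd := "date +%s.%N"
		// Used as a printf-style format with no arguments, Go's fmt
		// annotates each verb instead of failing (go vet warns about
		// non-constant format strings for exactly this reason):
		fmt.Println(fmt.Sprintf(cmd)) // date +%!s(MISSING).%!N(MISSING)
	}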
	I0314 18:03:03.850814    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:03:05.839156    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:03:05.839156    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:05.839269    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:03:08.226852    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:03:08.226852    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:08.232882    5716 main.go:141] libmachine: Using SSH client type: native
	I0314 18:03:08.233190    5716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.78 22 <nil> <nil>}
	I0314 18:03:08.233190    5716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 18:03:08.381406    5716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:03:08.381460    5716 machine.go:97] duration metric: took 40.7090886s to provisionDockerMachine
	I0314 18:03:08.381514    5716 start.go:293] postStartSetup for "functional-866600" (driver="hyperv")
	I0314 18:03:08.381570    5716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:03:08.390794    5716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:03:08.391644    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:03:10.350086    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:03:10.350140    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:10.350328    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:03:12.722931    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:03:12.722931    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:12.724217    5716 sshutil.go:53] new ssh client: &{IP:172.17.91.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-866600\id_rsa Username:docker}
	I0314 18:03:12.824688    5716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4335656s)
	I0314 18:03:12.833407    5716 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:03:12.840746    5716 command_runner.go:130] > NAME=Buildroot
	I0314 18:03:12.840746    5716 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 18:03:12.840746    5716 command_runner.go:130] > ID=buildroot
	I0314 18:03:12.840746    5716 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 18:03:12.840746    5716 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 18:03:12.840746    5716 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:03:12.840746    5716 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 18:03:12.840746    5716 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 18:03:12.841341    5716 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 18:03:12.841887    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 18:03:12.842628    5716 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\11052\hosts -> hosts in /etc/test/nested/copy/11052
	I0314 18:03:12.842628    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\11052\hosts -> /etc/test/nested/copy/11052/hosts
	I0314 18:03:12.851797    5716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11052
	I0314 18:03:12.867995    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 18:03:12.912059    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\11052\hosts --> /etc/test/nested/copy/11052/hosts (40 bytes)
	I0314 18:03:12.959005    5716 start.go:296] duration metric: took 4.5770956s for postStartSetup
	I0314 18:03:12.959005    5716 fix.go:56] duration metric: took 47.8604666s for fixHost
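
For orientation: the postStartSetup file sync above maps everything under the local .minikube\files tree 1:1 onto the guest filesystem (110522.pem into /etc/ssl/certs, the nested hosts file into /etc/test/nested/copy/11052). A minimal Go sketch of that mapping, purely illustrative (the walk and path handling are assumptions, not minikube's actual filesync code):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    func main() {
        // Local asset root, as scanned in the log above.
        root := `C:\Users\jenkins.minikube7\minikube-integration\.minikube\files`
        filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return nil // skip unreadable entries and directories
            }
            // The path relative to the root becomes the absolute guest path.
            rel, _ := filepath.Rel(root, p)
            dst := "/" + strings.ReplaceAll(rel, `\`, "/")
            fmt.Printf("scp %s --> %s\n", p, dst)
            return nil
        })
    }
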
	I0314 18:03:12.959005    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:03:14.940047    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:03:14.940047    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:14.940047    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:03:17.316411    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:03:17.317167    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:17.323834    5716 main.go:141] libmachine: Using SSH client type: native
	I0314 18:03:17.324469    5716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.78 22 <nil> <nil>}
	I0314 18:03:17.324469    5716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 18:03:17.454397    5716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710439397.718665893
	
	I0314 18:03:17.454505    5716 fix.go:216] guest clock: 1710439397.718665893
	I0314 18:03:17.454505    5716 fix.go:229] Guest: 2024-03-14 18:03:17.718665893 +0000 UTC Remote: 2024-03-14 18:03:12.9590059 +0000 UTC m=+53.021135501 (delta=4.759659993s)
	I0314 18:03:17.454505    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:03:19.421180    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:03:19.421180    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:19.421716    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:03:21.764851    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:03:21.764851    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:21.769352    5716 main.go:141] libmachine: Using SSH client type: native
	I0314 18:03:21.769879    5716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.78 22 <nil> <nil>}
	I0314 18:03:21.769970    5716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710439397
	I0314 18:03:21.910754    5716 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 18:03:17 UTC 2024
	
	I0314 18:03:21.910818    5716 fix.go:236] clock set: Thu Mar 14 18:03:17 UTC 2024
	 (err=<nil>)
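
The guest-clock handshake above reads the VM clock over SSH with `date +%s.%N`, compares it to the host clock (delta=4.759659993s here), and resets the guest with `sudo date -s @<unix>`. A self-contained Go sketch of that logic, with an assumed runSSH helper standing in for the SSH runner (not minikube's actual implementation):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // runSSH stands in for the SSH runner in the log; it simulates a guest
    // clock ~5s adrift so the example runs anywhere.
    func runSSH(cmd string) (string, error) {
        if strings.HasPrefix(cmd, "date +") {
            t := time.Now().Add(-5 * time.Second)
            return fmt.Sprintf("%d.%09d", t.Unix(), t.Nanosecond()), nil
        }
        return "", nil
    }

    // syncGuestClock mirrors the fix.go steps: read the guest clock, compute
    // the drift, and reset the clock only when the drift exceeds a threshold.
    func syncGuestClock(threshold time.Duration) error {
        out, err := runSSH("date +%s.%N")
        if err != nil {
            return err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        drift := time.Since(time.Unix(0, int64(secs*float64(time.Second))))
        if drift < 0 {
            drift = -drift
        }
        if drift < threshold {
            return nil // tolerable drift; leave the guest clock alone
        }
        _, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        return err
    }

    func main() {
        fmt.Println(syncGuestClock(2 * time.Second))
    }
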
	I0314 18:03:21.910818    5716 start.go:83] releasing machines lock for "functional-866600", held for 56.8116759s
	I0314 18:03:21.911043    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:03:23.863650    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:03:23.863650    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:23.863943    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:03:26.238934    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:03:26.238934    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:26.242329    5716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:03:26.242511    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:03:26.249864    5716 ssh_runner.go:195] Run: cat /version.json
	I0314 18:03:26.249864    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:03:28.230567    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:03:28.230567    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:28.230751    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:03:28.240556    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:03:28.240556    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:28.240556    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:03:30.648476    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:03:30.649469    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:30.649861    5716 sshutil.go:53] new ssh client: &{IP:172.17.91.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-866600\id_rsa Username:docker}
	I0314 18:03:30.674653    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:03:30.674653    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:03:30.675177    5716 sshutil.go:53] new ssh client: &{IP:172.17.91.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-866600\id_rsa Username:docker}
	I0314 18:03:30.805370    5716 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 18:03:30.805482    5716 command_runner.go:130] > {"iso_version": "v1.32.1-1710348681-18375", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "fd5757a6603390a2c0efe3b1e5cdd797538203fd"}
	I0314 18:03:30.805482    5716 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5628152s)
	I0314 18:03:30.805574    5716 ssh_runner.go:235] Completed: cat /version.json: (4.5553724s)
	I0314 18:03:30.819005    5716 ssh_runner.go:195] Run: systemctl --version
	I0314 18:03:30.827868    5716 command_runner.go:130] > systemd 252 (252)
	I0314 18:03:30.827979    5716 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0314 18:03:30.836314    5716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 18:03:30.846978    5716 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0314 18:03:30.847274    5716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:03:30.856689    5716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:03:30.877961    5716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0314 18:03:30.877961    5716 start.go:494] detecting cgroup driver to use...
	I0314 18:03:30.877961    5716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:03:30.911411    5716 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0314 18:03:30.921497    5716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 18:03:30.952179    5716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 18:03:30.972098    5716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 18:03:30.981946    5716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 18:03:31.012899    5716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:03:31.051560    5716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 18:03:31.085373    5716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:03:31.113382    5716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:03:31.141220    5716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 18:03:31.168731    5716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:03:31.186378    5716 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 18:03:31.196794    5716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:03:31.224990    5716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:03:31.512228    5716 ssh_runner.go:195] Run: sudo systemctl restart containerd
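
Before the restart, the containerd reconfiguration above amounts to two edits: write a crictl.yaml pointing at containerd's socket, and rewrite config.toml so SystemdCgroup is false (the "cgroupfs" driver). An illustrative Go sketch of both edits against local files (not the real implementation, which runs the tee/sed commands over SSH):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Equivalent of: printf %s "runtime-endpoint: ..." | sudo tee /etc/crictl.yaml
        crictl := "runtime-endpoint: unix:///run/containerd/containerd.sock\n"
        if err := os.WriteFile("crictl.yaml", []byte(crictl), 0o644); err != nil {
            panic(err)
        }

        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        cfg := []byte("[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n")
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(string(re.ReplaceAll(cfg, []byte("${1}SystemdCgroup = false"))))
    }
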
	I0314 18:03:31.543433    5716 start.go:494] detecting cgroup driver to use...
	I0314 18:03:31.552691    5716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 18:03:31.576387    5716 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0314 18:03:31.576450    5716 command_runner.go:130] > [Unit]
	I0314 18:03:31.576450    5716 command_runner.go:130] > Description=Docker Application Container Engine
	I0314 18:03:31.576450    5716 command_runner.go:130] > Documentation=https://docs.docker.com
	I0314 18:03:31.576450    5716 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0314 18:03:31.576450    5716 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0314 18:03:31.576502    5716 command_runner.go:130] > StartLimitBurst=3
	I0314 18:03:31.576502    5716 command_runner.go:130] > StartLimitIntervalSec=60
	I0314 18:03:31.576502    5716 command_runner.go:130] > [Service]
	I0314 18:03:31.576502    5716 command_runner.go:130] > Type=notify
	I0314 18:03:31.576502    5716 command_runner.go:130] > Restart=on-failure
	I0314 18:03:31.576502    5716 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0314 18:03:31.576571    5716 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0314 18:03:31.576571    5716 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0314 18:03:31.576571    5716 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0314 18:03:31.576635    5716 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0314 18:03:31.576635    5716 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0314 18:03:31.576635    5716 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0314 18:03:31.576635    5716 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0314 18:03:31.576698    5716 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0314 18:03:31.576698    5716 command_runner.go:130] > ExecStart=
	I0314 18:03:31.576698    5716 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0314 18:03:31.576698    5716 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0314 18:03:31.576698    5716 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0314 18:03:31.576876    5716 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0314 18:03:31.576876    5716 command_runner.go:130] > LimitNOFILE=infinity
	I0314 18:03:31.576876    5716 command_runner.go:130] > LimitNPROC=infinity
	I0314 18:03:31.576876    5716 command_runner.go:130] > LimitCORE=infinity
	I0314 18:03:31.576876    5716 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0314 18:03:31.576949    5716 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0314 18:03:31.576976    5716 command_runner.go:130] > TasksMax=infinity
	I0314 18:03:31.576976    5716 command_runner.go:130] > TimeoutStartSec=0
	I0314 18:03:31.576976    5716 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0314 18:03:31.576976    5716 command_runner.go:130] > Delegate=yes
	I0314 18:03:31.576976    5716 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0314 18:03:31.576976    5716 command_runner.go:130] > KillMode=process
	I0314 18:03:31.577061    5716 command_runner.go:130] > [Install]
	I0314 18:03:31.577084    5716 command_runner.go:130] > WantedBy=multi-user.target
	I0314 18:03:31.588350    5716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:03:31.623583    5716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:03:31.660992    5716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:03:31.693541    5716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:03:31.715716    5716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:03:31.752109    5716 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0314 18:03:31.763368    5716 ssh_runner.go:195] Run: which cri-dockerd
	I0314 18:03:31.769420    5716 command_runner.go:130] > /usr/bin/cri-dockerd
	I0314 18:03:31.778898    5716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 18:03:31.795774    5716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 18:03:31.839382    5716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 18:03:32.111668    5716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 18:03:32.361946    5716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 18:03:32.362249    5716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
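
The log reports a 130-byte /etc/docker/daemon.json being pushed but never shows its contents. A plausible payload, consistent with "configuring docker to use cgroupfs as cgroup driver", would set exec-opts; the exact fields below are an assumption for illustration only:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed daemon.json fields; the log only reports the byte count.
        daemon := map[string]any{
            "exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
            "log-driver": "json-file",
            "log-opts":   map[string]string{"max-size": "100m"},
        }
        b, err := json.MarshalIndent(daemon, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b)) // what would be scp'd to /etc/docker/daemon.json
    }
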
	I0314 18:03:32.400945    5716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:03:32.648944    5716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 18:03:45.415474    5716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.7655857s)
	I0314 18:03:45.425697    5716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 18:03:45.461706    5716 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0314 18:03:45.506300    5716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:03:45.537634    5716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 18:03:45.738379    5716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 18:03:45.940392    5716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:03:46.138881    5716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 18:03:46.176289    5716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:03:46.206924    5716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:03:46.403631    5716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 18:03:46.532219    5716 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 18:03:46.545002    5716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 18:03:46.553765    5716 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0314 18:03:46.553765    5716 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 18:03:46.553765    5716 command_runner.go:130] > Device: 0,22	Inode: 1504        Links: 1
	I0314 18:03:46.553765    5716 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0314 18:03:46.553765    5716 command_runner.go:130] > Access: 2024-03-14 18:03:46.805163743 +0000
	I0314 18:03:46.553765    5716 command_runner.go:130] > Modify: 2024-03-14 18:03:46.703153609 +0000
	I0314 18:03:46.553765    5716 command_runner.go:130] > Change: 2024-03-14 18:03:46.709154205 +0000
	I0314 18:03:46.553765    5716 command_runner.go:130] >  Birth: -
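
The "Will wait 60s for socket path" step is a poll-until-the-socket-appears loop, confirmed here by the stat call above. A self-contained sketch of such a wait (the helper name is an assumption):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket, or times out.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }
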
	I0314 18:03:46.553765    5716 start.go:562] Will wait 60s for crictl version
	I0314 18:03:46.563346    5716 ssh_runner.go:195] Run: which crictl
	I0314 18:03:46.572948    5716 command_runner.go:130] > /usr/bin/crictl
	I0314 18:03:46.580893    5716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:03:46.648815    5716 command_runner.go:130] > Version:  0.1.0
	I0314 18:03:46.648895    5716 command_runner.go:130] > RuntimeName:  docker
	I0314 18:03:46.648895    5716 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0314 18:03:46.648895    5716 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 18:03:46.648978    5716 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 18:03:46.657532    5716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:03:46.688984    5716 command_runner.go:130] > 25.0.4
	I0314 18:03:46.696158    5716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:03:46.727801    5716 command_runner.go:130] > 25.0.4
	I0314 18:03:46.733739    5716 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 18:03:46.733862    5716 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 18:03:46.737279    5716 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 18:03:46.737279    5716 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 18:03:46.737279    5716 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 18:03:46.737279    5716 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 18:03:46.740086    5716 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 18:03:46.740086    5716 ip.go:210] interface addr: 172.17.80.1/20
	I0314 18:03:46.749460    5716 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 18:03:46.755509    5716 command_runner.go:130] > 172.17.80.1	host.minikube.internal
	I0314 18:03:46.755599    5716 kubeadm.go:877] updating cluster {Name:functional-866600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-866600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.91.78 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 18:03:46.756243    5716 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:03:46.764430    5716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 18:03:46.788666    5716 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0314 18:03:46.788666    5716 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0314 18:03:46.788666    5716 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0314 18:03:46.788666    5716 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0314 18:03:46.788666    5716 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0314 18:03:46.788666    5716 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0314 18:03:46.788666    5716 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0314 18:03:46.788666    5716 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 18:03:46.788826    5716 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 18:03:46.788826    5716 docker.go:615] Images already preloaded, skipping extraction
	I0314 18:03:46.795276    5716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 18:03:46.829610    5716 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0314 18:03:46.829610    5716 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0314 18:03:46.829610    5716 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0314 18:03:46.829610    5716 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0314 18:03:46.829610    5716 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0314 18:03:46.829610    5716 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0314 18:03:46.829610    5716 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0314 18:03:46.829610    5716 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 18:03:46.829610    5716 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 18:03:46.829610    5716 cache_images.go:84] Images are preloaded, skipping loading
	I0314 18:03:46.829610    5716 kubeadm.go:928] updating node { 172.17.91.78 8441 v1.28.4 docker true true} ...
	I0314 18:03:46.830599    5716 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-866600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.91.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-866600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:03:46.836596    5716 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 18:03:46.871687    5716 command_runner.go:130] > cgroupfs
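
The "cgroupfs" answer above comes from asking the engine directly. The same probe, sketched in Go (illustrative; it requires a docker CLI on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Println("docker not available:", err)
            return
        }
        // "cgroupfs" in the run above; could also be "systemd".
        fmt.Println("detected cgroup driver:", strings.TrimSpace(string(out)))
    }
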
	I0314 18:03:46.871687    5716 cni.go:84] Creating CNI manager for ""
	I0314 18:03:46.871687    5716 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 18:03:46.871687    5716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 18:03:46.871687    5716 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.91.78 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-866600 NodeName:functional-866600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.91.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.91.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 18:03:46.871687    5716 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.91.78
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-866600"
	  kubeletExtraArgs:
	    node-ip: 172.17.91.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.91.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
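
The kubeadm config above is rendered from the options struct logged just before it (AdvertiseAddress, APIServerPort, and so on). A minimal sketch of that kind of templating; the template text and field names here are illustrative, not minikube's own:

    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.APIServerPort}}\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        err := t.Execute(os.Stdout, struct {
            AdvertiseAddress string
            APIServerPort    int
        }{"172.17.91.78", 8441})
        if err != nil {
            panic(err)
        }
    }
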
	
	I0314 18:03:46.880659    5716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:03:46.897232    5716 command_runner.go:130] > kubeadm
	I0314 18:03:46.897232    5716 command_runner.go:130] > kubectl
	I0314 18:03:46.897232    5716 command_runner.go:130] > kubelet
	I0314 18:03:46.897232    5716 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 18:03:46.908224    5716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 18:03:46.932131    5716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0314 18:03:46.959947    5716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:03:46.989327    5716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0314 18:03:47.026432    5716 ssh_runner.go:195] Run: grep 172.17.91.78	control-plane.minikube.internal$ /etc/hosts
	I0314 18:03:47.032201    5716 command_runner.go:130] > 172.17.91.78	control-plane.minikube.internal
	I0314 18:03:47.041570    5716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:03:47.234691    5716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:03:47.258813    5716 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600 for IP: 172.17.91.78
	I0314 18:03:47.258813    5716 certs.go:194] generating shared ca certs ...
	I0314 18:03:47.258813    5716 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:03:47.259379    5716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 18:03:47.259379    5716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 18:03:47.259995    5716 certs.go:256] generating profile certs ...
	I0314 18:03:47.260806    5716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.key
	I0314 18:03:47.260948    5716 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\apiserver.key.8e671bad
	I0314 18:03:47.260948    5716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\proxy-client.key
	I0314 18:03:47.260948    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:03:47.261648    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:03:47.261847    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:03:47.262018    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:03:47.262176    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:03:47.262224    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:03:47.262224    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:03:47.262224    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:03:47.262987    5716 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 18:03:47.263038    5716 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 18:03:47.263038    5716 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 18:03:47.263562    5716 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 18:03:47.263784    5716 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 18:03:47.263960    5716 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 18:03:47.263960    5716 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 18:03:47.264522    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 18:03:47.264522    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:03:47.264522    5716 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 18:03:47.265907    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:03:47.308537    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 18:03:47.348523    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:03:47.389927    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 18:03:47.434128    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 18:03:47.476071    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 18:03:47.514427    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:03:47.563856    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 18:03:47.643750    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 18:03:47.703055    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:03:47.749755    5716 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 18:03:47.793480    5716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 18:03:47.840146    5716 ssh_runner.go:195] Run: openssl version
	I0314 18:03:47.849362    5716 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 18:03:47.859285    5716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 18:03:47.885870    5716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 18:03:47.894057    5716 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 18:03:47.894114    5716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 18:03:47.903020    5716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 18:03:47.913027    5716 command_runner.go:130] > 3ec20f2e
	I0314 18:03:47.922168    5716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:03:47.951586    5716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:03:47.986557    5716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:03:47.994522    5716 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:03:47.994547    5716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:03:48.003373    5716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:03:48.018379    5716 command_runner.go:130] > b5213941
	I0314 18:03:48.027368    5716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:03:48.070274    5716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 18:03:48.103681    5716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 18:03:48.112004    5716 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 18:03:48.112004    5716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 18:03:48.120628    5716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 18:03:48.132460    5716 command_runner.go:130] > 51391683
	I0314 18:03:48.141408    5716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
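
The hash-and-symlink sequence above follows OpenSSL's CA lookup convention: a certificate in /etc/ssl/certs is found via a symlink named <subject-hash>.0, where the hash comes from `openssl x509 -hash`. The same steps sketched in Go (requires openssl on PATH and write access to the certs directory):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // hashLink creates the <subject-hash>.0 symlink for certPath, mirroring
    // the `openssl x509 -hash` + `ln -fs` pair in the log.
    func hashLink(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := filepath.Join(certsDir, hash+".0")
        os.Remove(link) // mirror the -f in `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
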
	I0314 18:03:48.170268    5716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:03:48.182125    5716 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:03:48.182187    5716 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0314 18:03:48.182187    5716 command_runner.go:130] > Device: 8,1	Inode: 1053989     Links: 1
	I0314 18:03:48.182187    5716 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 18:03:48.182187    5716 command_runner.go:130] > Access: 2024-03-14 18:01:12.804384974 +0000
	I0314 18:03:48.182187    5716 command_runner.go:130] > Modify: 2024-03-14 18:01:12.804384974 +0000
	I0314 18:03:48.182249    5716 command_runner.go:130] > Change: 2024-03-14 18:01:12.804384974 +0000
	I0314 18:03:48.182249    5716 command_runner.go:130] >  Birth: 2024-03-14 18:01:12.804384974 +0000
	I0314 18:03:48.190798    5716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 18:03:48.198794    5716 command_runner.go:130] > Certificate will not expire
	I0314 18:03:48.208172    5716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 18:03:48.217277    5716 command_runner.go:130] > Certificate will not expire
	I0314 18:03:48.227198    5716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 18:03:48.236825    5716 command_runner.go:130] > Certificate will not expire
	I0314 18:03:48.250619    5716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 18:03:48.260835    5716 command_runner.go:130] > Certificate will not expire
	I0314 18:03:48.270118    5716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 18:03:48.279723    5716 command_runner.go:130] > Certificate will not expire
	I0314 18:03:48.288642    5716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 18:03:48.297378    5716 command_runner.go:130] > Certificate will not expire
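
Each `-checkend 86400` probe above asks whether a certificate expires within the next 24 hours. For reference, the equivalent check in Go without shelling out to openssl (a sketch):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within duration d, matching `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour))
    }
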
	I0314 18:03:48.297378    5716 kubeadm.go:391] StartCluster: {Name:functional-866600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-866600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.91.78 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:03:48.304963    5716 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 18:03:48.353720    5716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 18:03:48.370805    5716 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0314 18:03:48.370805    5716 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0314 18:03:48.370902    5716 command_runner.go:130] > /var/lib/minikube/etcd:
	I0314 18:03:48.370902    5716 command_runner.go:130] > member
	W0314 18:03:48.370962    5716 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 18:03:48.371022    5716 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 18:03:48.371022    5716 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 18:03:48.379459    5716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 18:03:48.399448    5716 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:03:48.400493    5716 kubeconfig.go:125] found "functional-866600" server: "https://172.17.91.78:8441"
	I0314 18:03:48.401447    5716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:03:48.402446    5716 kapi.go:59] client config for functional-866600: &rest.Config{Host:"https://172.17.91.78:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-866600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-866600\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 18:03:48.403450    5716 cert_rotation.go:137] Starting client certificate rotation controller
	I0314 18:03:48.411455    5716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 18:03:48.426482    5716 kubeadm.go:624] The running cluster does not require reconfiguration: 172.17.91.78
	I0314 18:03:48.426689    5716 kubeadm.go:1153] stopping kube-system containers ...
	I0314 18:03:48.433091    5716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 18:03:48.477680    5716 command_runner.go:130] > 0ab078b3b1cb
	I0314 18:03:48.477680    5716 command_runner.go:130] > 1e6c09ec79cb
	I0314 18:03:48.477680    5716 command_runner.go:130] > eb1790abcad2
	I0314 18:03:48.477680    5716 command_runner.go:130] > 9bce7b51a10d
	I0314 18:03:48.478139    5716 command_runner.go:130] > e1d44ecb441a
	I0314 18:03:48.478139    5716 command_runner.go:130] > c3d217441c89
	I0314 18:03:48.478139    5716 command_runner.go:130] > 44e064493f5c
	I0314 18:03:48.478139    5716 command_runner.go:130] > 04cd8910e468
	I0314 18:03:48.478139    5716 command_runner.go:130] > 35b40dfba1a9
	I0314 18:03:48.478139    5716 command_runner.go:130] > 8eeafd22647e
	I0314 18:03:48.478139    5716 command_runner.go:130] > ad2a39b82075
	I0314 18:03:48.478139    5716 command_runner.go:130] > 36c6f4519365
	I0314 18:03:48.478139    5716 command_runner.go:130] > b459c7260fab
	I0314 18:03:48.478207    5716 command_runner.go:130] > 44f4f68e5fae
	I0314 18:03:48.478207    5716 command_runner.go:130] > fa12a5de2bb7
	I0314 18:03:48.478207    5716 command_runner.go:130] > d50b7e3cc459
	I0314 18:03:48.478207    5716 command_runner.go:130] > f611537ce328
	I0314 18:03:48.478207    5716 command_runner.go:130] > 7e30d4621f75
	I0314 18:03:48.478207    5716 command_runner.go:130] > e12e36bccaf1
	I0314 18:03:48.478207    5716 command_runner.go:130] > 86e3255c8648
	I0314 18:03:48.480423    5716 docker.go:483] Stopping containers: [0ab078b3b1cb 1e6c09ec79cb eb1790abcad2 9bce7b51a10d e1d44ecb441a c3d217441c89 44e064493f5c 04cd8910e468 35b40dfba1a9 8eeafd22647e ad2a39b82075 36c6f4519365 b459c7260fab 44f4f68e5fae fa12a5de2bb7 d50b7e3cc459 f611537ce328 7e30d4621f75 e12e36bccaf1 86e3255c8648]
	I0314 18:03:48.489419    5716 ssh_runner.go:195] Run: docker stop 0ab078b3b1cb 1e6c09ec79cb eb1790abcad2 9bce7b51a10d e1d44ecb441a c3d217441c89 44e064493f5c 04cd8910e468 35b40dfba1a9 8eeafd22647e ad2a39b82075 36c6f4519365 b459c7260fab 44f4f68e5fae fa12a5de2bb7 d50b7e3cc459 f611537ce328 7e30d4621f75 e12e36bccaf1 86e3255c8648
	I0314 18:03:49.263598    5716 command_runner.go:130] > 0ab078b3b1cb
	I0314 18:03:49.263598    5716 command_runner.go:130] > 1e6c09ec79cb
	I0314 18:03:49.263598    5716 command_runner.go:130] > eb1790abcad2
	I0314 18:03:49.263598    5716 command_runner.go:130] > 9bce7b51a10d
	I0314 18:03:49.263598    5716 command_runner.go:130] > e1d44ecb441a
	I0314 18:03:49.263598    5716 command_runner.go:130] > c3d217441c89
	I0314 18:03:49.263598    5716 command_runner.go:130] > 44e064493f5c
	I0314 18:03:49.263598    5716 command_runner.go:130] > 04cd8910e468
	I0314 18:03:49.263598    5716 command_runner.go:130] > 35b40dfba1a9
	I0314 18:03:49.263598    5716 command_runner.go:130] > 8eeafd22647e
	I0314 18:03:49.263598    5716 command_runner.go:130] > ad2a39b82075
	I0314 18:03:49.263598    5716 command_runner.go:130] > 36c6f4519365
	I0314 18:03:49.263598    5716 command_runner.go:130] > b459c7260fab
	I0314 18:03:49.263598    5716 command_runner.go:130] > 44f4f68e5fae
	I0314 18:03:49.263598    5716 command_runner.go:130] > fa12a5de2bb7
	I0314 18:03:49.263598    5716 command_runner.go:130] > d50b7e3cc459
	I0314 18:03:49.263598    5716 command_runner.go:130] > f611537ce328
	I0314 18:03:49.263598    5716 command_runner.go:130] > 7e30d4621f75
	I0314 18:03:49.263598    5716 command_runner.go:130] > e12e36bccaf1
	I0314 18:03:49.263598    5716 command_runner.go:130] > 86e3255c8648
	I0314 18:03:49.274929    5716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 18:03:49.354131    5716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 18:03:49.370429    5716 command_runner.go:130] > -rw------- 1 root root 5639 Mar 14 18:01 /etc/kubernetes/admin.conf
	I0314 18:03:49.370429    5716 command_runner.go:130] > -rw------- 1 root root 5652 Mar 14 18:01 /etc/kubernetes/controller-manager.conf
	I0314 18:03:49.370429    5716 command_runner.go:130] > -rw------- 1 root root 2007 Mar 14 18:01 /etc/kubernetes/kubelet.conf
	I0314 18:03:49.370429    5716 command_runner.go:130] > -rw------- 1 root root 5600 Mar 14 18:01 /etc/kubernetes/scheduler.conf
	I0314 18:03:49.370429    5716 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Mar 14 18:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 14 18:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Mar 14 18:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 14 18:01 /etc/kubernetes/scheduler.conf
	
	I0314 18:03:49.379088    5716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0314 18:03:49.396141    5716 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0314 18:03:49.405310    5716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0314 18:03:49.422250    5716 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0314 18:03:49.430782    5716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0314 18:03:49.448476    5716 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:03:49.457037    5716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 18:03:49.485600    5716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0314 18:03:49.503723    5716 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:03:49.513404    5716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 18:03:49.540912    5716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 18:03:49.557902    5716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 18:03:49.725818    5716 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 18:03:49.725926    5716 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0314 18:03:49.725986    5716 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0314 18:03:49.725986    5716 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 18:03:49.726041    5716 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0314 18:03:49.726093    5716 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0314 18:03:49.726131    5716 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0314 18:03:49.726172    5716 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0314 18:03:49.726207    5716 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0314 18:03:49.726284    5716 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 18:03:49.726346    5716 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 18:03:49.726384    5716 command_runner.go:130] > [certs] Using the existing "sa" key
	I0314 18:03:49.726474    5716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 18:03:50.547881    5716 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 18:03:50.547947    5716 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0314 18:03:50.547947    5716 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0314 18:03:50.547947    5716 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 18:03:50.547947    5716 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 18:03:50.548056    5716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 18:03:50.864085    5716 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 18:03:50.864085    5716 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 18:03:50.864185    5716 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0314 18:03:50.864185    5716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 18:03:50.948777    5716 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 18:03:50.949613    5716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 18:03:50.959778    5716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 18:03:50.960778    5716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 18:03:50.965991    5716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 18:03:51.121845    5716 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 18:03:51.124299    5716 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:03:51.133252    5716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:03:51.650123    5716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:03:52.142215    5716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:03:52.640730    5716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:03:53.144798    5716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:03:53.167737    5716 command_runner.go:130] > 7145
	I0314 18:03:53.167737    5716 api_server.go:72] duration metric: took 2.0438248s to wait for apiserver process to appear ...
	I0314 18:03:53.167737    5716 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:03:53.167737    5716 api_server.go:253] Checking apiserver healthz at https://172.17.91.78:8441/healthz ...
	I0314 18:03:56.025184    5716 api_server.go:279] https://172.17.91.78:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 18:03:56.025184    5716 api_server.go:103] status: https://172.17.91.78:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 18:03:56.025184    5716 api_server.go:253] Checking apiserver healthz at https://172.17.91.78:8441/healthz ...
	I0314 18:03:56.086521    5716 api_server.go:279] https://172.17.91.78:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 18:03:56.086521    5716 api_server.go:103] status: https://172.17.91.78:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 18:03:56.180520    5716 api_server.go:253] Checking apiserver healthz at https://172.17.91.78:8441/healthz ...
	I0314 18:03:56.190129    5716 api_server.go:279] https://172.17.91.78:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 18:03:56.190234    5716 api_server.go:103] status: https://172.17.91.78:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 18:03:56.682426    5716 api_server.go:253] Checking apiserver healthz at https://172.17.91.78:8441/healthz ...
	I0314 18:03:56.690053    5716 api_server.go:279] https://172.17.91.78:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 18:03:56.690294    5716 api_server.go:103] status: https://172.17.91.78:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 18:03:57.168269    5716 api_server.go:253] Checking apiserver healthz at https://172.17.91.78:8441/healthz ...
	I0314 18:03:57.176191    5716 api_server.go:279] https://172.17.91.78:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 18:03:57.176191    5716 api_server.go:103] status: https://172.17.91.78:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 18:03:57.677349    5716 api_server.go:253] Checking apiserver healthz at https://172.17.91.78:8441/healthz ...
	I0314 18:03:57.685908    5716 api_server.go:279] https://172.17.91.78:8441/healthz returned 200:
	ok
	I0314 18:03:57.686871    5716 round_trippers.go:463] GET https://172.17.91.78:8441/version
	I0314 18:03:57.686871    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:57.686871    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:57.686871    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:57.697736    5716 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 18:03:57.697794    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:57.697794    5716 round_trippers.go:580]     Audit-Id: 70f49c44-605e-4c7d-a16f-d0d67930e47d
	I0314 18:03:57.697794    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:57.697794    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:57.697794    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:57.697850    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:57.697850    5716 round_trippers.go:580]     Content-Length: 264
	I0314 18:03:57.697850    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:03:57 GMT
	I0314 18:03:57.697905    5716 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0314 18:03:57.698063    5716 api_server.go:141] control plane version: v1.28.4
	I0314 18:03:57.698114    5716 api_server.go:131] duration metric: took 4.5300427s to wait for apiserver health ...
	I0314 18:03:57.698114    5716 cni.go:84] Creating CNI manager for ""
	I0314 18:03:57.698114    5716 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 18:03:57.700578    5716 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 18:03:57.710587    5716 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 18:03:57.728725    5716 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 18:03:57.761672    5716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:03:57.761881    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods
	I0314 18:03:57.761933    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:57.761933    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:57.761933    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:57.768687    5716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:03:57.768839    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:57.768839    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:57.768839    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:57.768839    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:03:58 GMT
	I0314 18:03:57.768839    5716 round_trippers.go:580]     Audit-Id: df0903cb-428b-4a5f-b9d5-0e1c78dc92f2
	I0314 18:03:57.768839    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:57.768839    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:57.769993    5716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"540"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"488","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49547 chars]
	I0314 18:03:57.775241    5716 system_pods.go:59] 7 kube-system pods found
	I0314 18:03:57.775317    5716 system_pods.go:61] "coredns-5dd5756b68-n84nx" [5d8f04ff-70b9-4332-a120-8993958cfd33] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 18:03:57.775317    5716 system_pods.go:61] "etcd-functional-866600" [22c38b7f-5371-47b4-b119-d9aec9d349cf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 18:03:57.775317    5716 system_pods.go:61] "kube-apiserver-functional-866600" [9849501a-615b-4ff4-9914-35f4e0e718aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 18:03:57.775389    5716 system_pods.go:61] "kube-controller-manager-functional-866600" [f415043c-6140-4e46-8769-1445681ccc85] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 18:03:57.775389    5716 system_pods.go:61] "kube-proxy-7dppw" [8123be17-49ac-450e-9ff2-48b35f8a9a0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 18:03:57.775389    5716 system_pods.go:61] "kube-scheduler-functional-866600" [ae9aa2ce-db1b-4105-9a2c-243505551b2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 18:03:57.775389    5716 system_pods.go:61] "storage-provisioner" [74f7dcf3-94a7-441e-a9c5-207e2bbd1efe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 18:03:57.775389    5716 system_pods.go:74] duration metric: took 13.6537ms to wait for pod list to return data ...
	I0314 18:03:57.775468    5716 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:03:57.775490    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes
	I0314 18:03:57.775490    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:57.775490    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:57.775490    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:57.778651    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:03:57.778651    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:57.778965    5716 round_trippers.go:580]     Audit-Id: d7cc5bb1-1a5a-45ca-ad68-05c7ac6d1742
	I0314 18:03:57.778965    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:57.778965    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:57.778965    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:57.778965    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:57.778965    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:03:58 GMT
	I0314 18:03:57.779336    5716 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"540"},"items":[{"metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4838 chars]
	I0314 18:03:57.779936    5716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:03:57.779936    5716 node_conditions.go:123] node cpu capacity is 2
	I0314 18:03:57.779936    5716 node_conditions.go:105] duration metric: took 4.4672ms to run NodePressure ...
	I0314 18:03:57.779936    5716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 18:03:58.346618    5716 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0314 18:03:58.346618    5716 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0314 18:03:58.346618    5716 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 18:03:58.346618    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0314 18:03:58.346618    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:58.346618    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:58.346618    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:58.354196    5716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:03:58.354196    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:58.354196    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:58.354196    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:03:58 GMT
	I0314 18:03:58.354196    5716 round_trippers.go:580]     Audit-Id: e80ca519-505c-4852-ac6c-502bf91c976c
	I0314 18:03:58.354196    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:58.354196    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:58.354196    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:58.355204    5716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"547"},"items":[{"metadata":{"name":"etcd-functional-866600","namespace":"kube-system","uid":"22c38b7f-5371-47b4-b119-d9aec9d349cf","resourceVersion":"492","creationTimestamp":"2024-03-14T18:01:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.91.78:2379","kubernetes.io/config.hash":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.mirror":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.seen":"2024-03-14T18:01:15.863787877Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29744 chars]
	I0314 18:03:58.356202    5716 kubeadm.go:733] kubelet initialised
	I0314 18:03:58.356202    5716 kubeadm.go:734] duration metric: took 9.5826ms waiting for restarted kubelet to initialise ...
	I0314 18:03:58.356202    5716 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:03:58.356202    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods
	I0314 18:03:58.356202    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:58.356202    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:58.356202    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:58.364516    5716 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:03:58.364516    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:58.364516    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:58.364516    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:58.364516    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:58.364516    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:58.364516    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:03:58 GMT
	I0314 18:03:58.364516    5716 round_trippers.go:580]     Audit-Id: 2c68bf21-dd9c-4c5e-a305-44a0540ca881
	I0314 18:03:58.365208    5716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"547"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"488","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49547 chars]
	I0314 18:03:58.367710    5716 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n84nx" in "kube-system" namespace to be "Ready" ...
	I0314 18:03:58.367710    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n84nx
	I0314 18:03:58.367710    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:58.367710    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:58.367710    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:58.375682    5716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:03:58.375682    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:58.375682    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:58.375682    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:58.375682    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:03:58 GMT
	I0314 18:03:58.375682    5716 round_trippers.go:580]     Audit-Id: 28065749-a336-404d-891e-fe1546c3d0f1
	I0314 18:03:58.375682    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:58.375682    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:58.375682    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"488","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6338 chars]
	I0314 18:03:58.376280    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:03:58.376280    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:58.376280    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:58.376280    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:58.383421    5716 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:03:58.383421    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:58.383421    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:58.383421    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:58.383421    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:58.383421    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:58.383421    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:03:58 GMT
	I0314 18:03:58.383421    5716 round_trippers.go:580]     Audit-Id: 5b11013b-3f2d-4317-8299-4c1dc20d0dea
	I0314 18:03:58.383956    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:03:58.882275    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n84nx
	I0314 18:03:58.882341    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:58.882341    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:58.882341    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:58.885260    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:03:58.885260    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:58.885260    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:03:59 GMT
	I0314 18:03:58.885260    5716 round_trippers.go:580]     Audit-Id: 1a29298a-5588-427d-b7ff-7cc495378cdc
	I0314 18:03:58.885260    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:58.885260    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:58.885260    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:58.885260    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:58.886255    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"551","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0314 18:03:58.886255    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:03:58.887276    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:58.887276    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:58.887276    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:58.891374    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:03:58.891374    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:58.891374    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:03:59 GMT
	I0314 18:03:58.891374    5716 round_trippers.go:580]     Audit-Id: 79509898-45b4-4f37-81b4-72799c659d76
	I0314 18:03:58.891374    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:58.891374    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:58.891374    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:58.891374    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:58.891374    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:03:59.368247    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n84nx
	I0314 18:03:59.368247    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:59.368247    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:59.368247    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:59.373907    5716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:03:59.373907    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:59.373979    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:59.373979    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:59.373979    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:03:59 GMT
	I0314 18:03:59.373979    5716 round_trippers.go:580]     Audit-Id: 7ddab430-1bb8-466c-b802-b43e3affaff3
	I0314 18:03:59.373979    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:59.373979    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:59.373979    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"551","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0314 18:03:59.375105    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:03:59.375105    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:59.375105    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:59.375105    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:59.377995    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:03:59.378772    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:59.378772    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:59.378772    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:59.378772    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:59.378772    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:03:59 GMT
	I0314 18:03:59.378772    5716 round_trippers.go:580]     Audit-Id: f149ff6e-60ee-43bd-8a3c-9ad391b9e341
	I0314 18:03:59.378772    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:59.378772    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:03:59.870346    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n84nx
	I0314 18:03:59.870431    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:59.870523    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:59.870523    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:59.874262    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:03:59.874262    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:59.874262    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:59.874262    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:59.874262    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:00 GMT
	I0314 18:03:59.874262    5716 round_trippers.go:580]     Audit-Id: de8d4fb3-21da-4cf5-8d66-fabe1ecaaedf
	I0314 18:03:59.874262    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:59.874262    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:59.875116    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"551","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0314 18:03:59.875824    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:03:59.875824    5716 round_trippers.go:469] Request Headers:
	I0314 18:03:59.875824    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:03:59.875824    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:03:59.879407    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:03:59.879508    5716 round_trippers.go:577] Response Headers:
	I0314 18:03:59.879508    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:03:59.879508    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:03:59.879508    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:03:59.879508    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:03:59.879508    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:00 GMT
	I0314 18:03:59.879508    5716 round_trippers.go:580]     Audit-Id: 16596b43-7906-483a-9895-0a8c74d418cf
	I0314 18:03:59.879508    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:00.369106    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n84nx
	I0314 18:04:00.369198    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:00.369198    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:00.369198    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:00.372904    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:00.373087    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:00.373087    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:00.373087    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:00.373087    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:00 GMT
	I0314 18:04:00.373087    5716 round_trippers.go:580]     Audit-Id: b341b872-31fe-4593-97b9-eebf3f0d0ccc
	I0314 18:04:00.373087    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:00.373087    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:00.374803    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"551","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0314 18:04:00.377232    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:00.377323    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:00.377323    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:00.377323    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:00.380230    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:04:00.380230    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:00.380230    5716 round_trippers.go:580]     Audit-Id: de216480-8e4f-413a-a38f-759c6c8d6683
	I0314 18:04:00.380230    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:00.380230    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:00.380230    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:00.380230    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:00.380230    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:00 GMT
	I0314 18:04:00.381161    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:00.381320    5716 pod_ready.go:102] pod "coredns-5dd5756b68-n84nx" in "kube-system" namespace has status "Ready":"False"
	I0314 18:04:00.882527    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n84nx
	I0314 18:04:00.882794    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:00.882794    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:00.882794    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:00.889120    5716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:04:00.889120    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:00.889120    5716 round_trippers.go:580]     Audit-Id: 93bea54f-96d1-44e7-9a52-dcab98842a81
	I0314 18:04:00.889120    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:00.889120    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:00.889120    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:00.889120    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:00.889120    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:01 GMT
	I0314 18:04:00.889120    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"551","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0314 18:04:00.889777    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:00.889777    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:00.889777    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:00.889777    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:00.892954    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:00.892954    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:00.892954    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:00.892954    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:00.892954    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:00.892954    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:01 GMT
	I0314 18:04:00.892954    5716 round_trippers.go:580]     Audit-Id: 480f8129-5382-49c9-8d4d-6411b1f586c9
	I0314 18:04:00.892954    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:00.892954    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:01.382128    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n84nx
	I0314 18:04:01.382128    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:01.382128    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:01.382128    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:01.385710    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:01.385710    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:01.385710    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:01.385710    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:01.385710    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:01 GMT
	I0314 18:04:01.385710    5716 round_trippers.go:580]     Audit-Id: b28f829a-5761-40f6-8dcb-d7430c734c9a
	I0314 18:04:01.385710    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:01.385710    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:01.386691    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"558","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6154 chars]
	I0314 18:04:01.386691    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:01.386691    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:01.386691    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:01.386691    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:01.390838    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:01.390900    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:01.390900    5716 round_trippers.go:580]     Audit-Id: 4d2464b3-98b5-42c6-af4e-b2999158f544
	I0314 18:04:01.390900    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:01.390900    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:01.390900    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:01.390970    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:01.390970    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:01 GMT
	I0314 18:04:01.390970    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:01.391505    5716 pod_ready.go:92] pod "coredns-5dd5756b68-n84nx" in "kube-system" namespace has status "Ready":"True"
	I0314 18:04:01.391559    5716 pod_ready.go:81] duration metric: took 3.023626s for pod "coredns-5dd5756b68-n84nx" in "kube-system" namespace to be "Ready" ...
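The half-second GET/inspect/retry loop above (and continuing below for etcd) is the standard client-go readiness-wait pattern: fetch the pod, check its Ready condition, and poll until it is True or the 4m0s deadline expires. A minimal Go sketch of that pattern, assuming a reachable default kubeconfig; waitPodReady and the setup code are illustrative, not minikube's actual pod_ready.go implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls every 500ms (matching the cadence visible in the log
    // above) until the pod reports condition Ready=True or the timeout expires.
    // Illustrative sketch only -- this is not minikube's pod_ready.go.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err // stop polling on API errors
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil // Ready condition not reported yet; keep polling
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-functional-866600", 4*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }

The follow-up GET of /api/v1/nodes/functional-866600 after each pod poll is a separate node-status check the waiter interleaves; the sketch above covers only the pod half of the loop.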
	I0314 18:04:01.391559    5716 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:01.391559    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-866600
	I0314 18:04:01.391559    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:01.391559    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:01.391559    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:01.394922    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:04:01.394922    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:01.394922    5716 round_trippers.go:580]     Audit-Id: 2302f34e-2163-4fd0-b370-c13831a87e0f
	I0314 18:04:01.394922    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:01.394922    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:01.394922    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:01.394922    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:01.394922    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:01 GMT
	I0314 18:04:01.395043    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-866600","namespace":"kube-system","uid":"22c38b7f-5371-47b4-b119-d9aec9d349cf","resourceVersion":"492","creationTimestamp":"2024-03-14T18:01:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.91.78:2379","kubernetes.io/config.hash":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.mirror":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.seen":"2024-03-14T18:01:15.863787877Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6286 chars]
	I0314 18:04:01.395571    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:01.395571    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:01.395615    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:01.395615    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:01.397357    5716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0314 18:04:01.397357    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:01.397357    5716 round_trippers.go:580]     Audit-Id: a695d1c9-9ea6-4ae8-82bb-c6347947f7d2
	I0314 18:04:01.397357    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:01.397357    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:01.397357    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:01.397357    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:01.398321    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:01 GMT
	I0314 18:04:01.398321    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:01.897928    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-866600
	I0314 18:04:01.897928    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:01.897928    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:01.897928    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:01.901957    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:01.901957    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:01.902024    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:01.902024    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:01.902024    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:01.902024    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:02 GMT
	I0314 18:04:01.902076    5716 round_trippers.go:580]     Audit-Id: 48fbf901-3a2f-4486-8033-442e534e3234
	I0314 18:04:01.902076    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:01.902076    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-866600","namespace":"kube-system","uid":"22c38b7f-5371-47b4-b119-d9aec9d349cf","resourceVersion":"492","creationTimestamp":"2024-03-14T18:01:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.91.78:2379","kubernetes.io/config.hash":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.mirror":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.seen":"2024-03-14T18:01:15.863787877Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6286 chars]
	I0314 18:04:01.902791    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:01.902791    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:01.902791    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:01.902842    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:01.905955    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:01.905955    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:01.905955    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:01.905955    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:01.905955    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:01.905955    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:02 GMT
	I0314 18:04:01.905955    5716 round_trippers.go:580]     Audit-Id: 9826ee1f-9f39-4bfd-ae32-146ac9cd0a94
	I0314 18:04:01.905955    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:01.906626    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:02.399714    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-866600
	I0314 18:04:02.399786    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:02.399786    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:02.399858    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:02.403026    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:02.404017    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:02.404105    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:02 GMT
	I0314 18:04:02.404169    5716 round_trippers.go:580]     Audit-Id: c4d5be64-306c-42a3-a2fb-fc8ab554eaf1
	I0314 18:04:02.404205    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:02.404289    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:02.404302    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:02.404302    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:02.404534    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-866600","namespace":"kube-system","uid":"22c38b7f-5371-47b4-b119-d9aec9d349cf","resourceVersion":"492","creationTimestamp":"2024-03-14T18:01:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.91.78:2379","kubernetes.io/config.hash":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.mirror":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.seen":"2024-03-14T18:01:15.863787877Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6286 chars]
	I0314 18:04:02.405223    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:02.405223    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:02.405308    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:02.405308    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:02.408541    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:02.408619    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:02.408619    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:02.408691    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:02.408691    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:02 GMT
	I0314 18:04:02.408744    5716 round_trippers.go:580]     Audit-Id: 58c7d59e-3b20-467f-89cb-12ac815dfa66
	I0314 18:04:02.408744    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:02.408807    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:02.409221    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:02.900568    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-866600
	I0314 18:04:02.900849    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:02.900849    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:02.900849    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:02.904309    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:02.904309    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:02.904309    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:02.904309    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:02.904858    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:02.904858    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:02.904858    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:03 GMT
	I0314 18:04:02.904858    5716 round_trippers.go:580]     Audit-Id: 975c8820-2db4-4e33-b14b-e93262590849
	I0314 18:04:02.905097    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-866600","namespace":"kube-system","uid":"22c38b7f-5371-47b4-b119-d9aec9d349cf","resourceVersion":"492","creationTimestamp":"2024-03-14T18:01:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.91.78:2379","kubernetes.io/config.hash":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.mirror":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.seen":"2024-03-14T18:01:15.863787877Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6286 chars]
	I0314 18:04:02.905526    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:02.905526    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:02.905526    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:02.905526    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:02.909079    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:02.909079    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:02.909079    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:02.909079    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:03 GMT
	I0314 18:04:02.909079    5716 round_trippers.go:580]     Audit-Id: b0e033b3-960e-4f58-ab10-205931cc08e0
	I0314 18:04:02.909079    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:02.909079    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:02.909282    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:02.909529    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:03.398987    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-866600
	I0314 18:04:03.399061    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:03.399061    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:03.399133    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:03.403510    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:03.403609    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:03.403609    5716 round_trippers.go:580]     Audit-Id: d5244f01-589a-4fd7-a677-1711bd7108c9
	I0314 18:04:03.403609    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:03.403671    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:03.403671    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:03.403671    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:03.403722    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:03 GMT
	I0314 18:04:03.403827    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-866600","namespace":"kube-system","uid":"22c38b7f-5371-47b4-b119-d9aec9d349cf","resourceVersion":"492","creationTimestamp":"2024-03-14T18:01:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.91.78:2379","kubernetes.io/config.hash":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.mirror":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.seen":"2024-03-14T18:01:15.863787877Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6286 chars]
	I0314 18:04:03.404846    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:03.404938    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:03.404938    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:03.405012    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:03.408586    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:03.408649    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:03.408649    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:03.408649    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:03.408649    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:03 GMT
	I0314 18:04:03.408649    5716 round_trippers.go:580]     Audit-Id: 2d6c982f-92cd-4414-bc3c-2854f3ab0e2e
	I0314 18:04:03.408649    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:03.408649    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:03.408917    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:03.408979    5716 pod_ready.go:102] pod "etcd-functional-866600" in "kube-system" namespace has status "Ready":"False"
	I0314 18:04:03.900194    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-866600
	I0314 18:04:03.900194    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:03.900194    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:03.900194    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:03.904532    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:03.904532    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:03.904532    5716 round_trippers.go:580]     Audit-Id: 81167d19-0a49-4913-a9a4-51511d67ddc6
	I0314 18:04:03.904532    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:03.904532    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:03.904532    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:03.904532    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:03.904532    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:04 GMT
	I0314 18:04:03.904532    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-866600","namespace":"kube-system","uid":"22c38b7f-5371-47b4-b119-d9aec9d349cf","resourceVersion":"492","creationTimestamp":"2024-03-14T18:01:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.91.78:2379","kubernetes.io/config.hash":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.mirror":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.seen":"2024-03-14T18:01:15.863787877Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6286 chars]
	I0314 18:04:03.905344    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:03.905402    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:03.905402    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:03.905402    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:03.908148    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:04:03.908148    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:03.908148    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:04 GMT
	I0314 18:04:03.908148    5716 round_trippers.go:580]     Audit-Id: 6f74e4a8-8fc3-46c1-9d13-3c5f7b36e718
	I0314 18:04:03.908148    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:03.908148    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:03.908148    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:03.908148    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:03.908756    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:04.402587    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-866600
	I0314 18:04:04.402630    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:04.402630    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:04.402630    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:04.406821    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:04.406885    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:04.406885    5716 round_trippers.go:580]     Audit-Id: 52fa9835-2987-4bb6-b5ba-82a3b9c92a73
	I0314 18:04:04.406885    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:04.406937    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:04.406937    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:04.406937    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:04.406937    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:04 GMT
	I0314 18:04:04.407202    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-866600","namespace":"kube-system","uid":"22c38b7f-5371-47b4-b119-d9aec9d349cf","resourceVersion":"492","creationTimestamp":"2024-03-14T18:01:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.91.78:2379","kubernetes.io/config.hash":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.mirror":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.seen":"2024-03-14T18:01:15.863787877Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6286 chars]
	I0314 18:04:04.407356    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:04.407884    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:04.407884    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:04.407884    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:04.417182    5716 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0314 18:04:04.417182    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:04.417182    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:04 GMT
	I0314 18:04:04.417182    5716 round_trippers.go:580]     Audit-Id: 241c1568-84a8-403d-a36d-98a0a0ecdda4
	I0314 18:04:04.417182    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:04.417182    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:04.417182    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:04.417182    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:04.417511    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:04.900584    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-866600
	I0314 18:04:04.900584    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:04.900584    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:04.900584    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:04.906213    5716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:04:04.906213    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:04.906213    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:04.906213    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:04.906213    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:04.906213    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:05 GMT
	I0314 18:04:04.906213    5716 round_trippers.go:580]     Audit-Id: 186ca61a-48f0-4421-bd79-ffe2df621b8d
	I0314 18:04:04.906213    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:04.906213    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-866600","namespace":"kube-system","uid":"22c38b7f-5371-47b4-b119-d9aec9d349cf","resourceVersion":"492","creationTimestamp":"2024-03-14T18:01:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.91.78:2379","kubernetes.io/config.hash":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.mirror":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.seen":"2024-03-14T18:01:15.863787877Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6286 chars]
	I0314 18:04:04.907348    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:04.907348    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:04.907348    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:04.907348    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:04.910016    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:04:04.910935    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:04.910984    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:04.910984    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:04.910984    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:04.910984    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:05 GMT
	I0314 18:04:04.910984    5716 round_trippers.go:580]     Audit-Id: 83d69dde-ce9c-477f-a2c9-4973a84100f8
	I0314 18:04:04.910984    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:04.911242    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:05.402180    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-866600
	I0314 18:04:05.402249    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:05.402249    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:05.402249    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:05.406509    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:05.406731    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:05.406731    5716 round_trippers.go:580]     Audit-Id: 12f22f3f-7ddb-4a69-bd78-a78ed6139e61
	I0314 18:04:05.406731    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:05.406731    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:05.406731    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:05.406731    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:05.406731    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:05 GMT
	I0314 18:04:05.406910    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-866600","namespace":"kube-system","uid":"22c38b7f-5371-47b4-b119-d9aec9d349cf","resourceVersion":"492","creationTimestamp":"2024-03-14T18:01:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.91.78:2379","kubernetes.io/config.hash":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.mirror":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.seen":"2024-03-14T18:01:15.863787877Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6286 chars]
	I0314 18:04:05.407221    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:05.407221    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:05.407221    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:05.407221    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:05.410715    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:04:05.410715    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:05.410715    5716 round_trippers.go:580]     Audit-Id: 55338e5b-e5ac-493f-9489-8bb0f10e6d72
	I0314 18:04:05.410715    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:05.410715    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:05.410715    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:05.410820    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:05.410820    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:05 GMT
	I0314 18:04:05.411025    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:05.411125    5716 pod_ready.go:102] pod "etcd-functional-866600" in "kube-system" namespace has status "Ready":"False"
	I0314 18:04:05.905554    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-866600
	I0314 18:04:05.905554    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:05.905554    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:05.905644    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:05.908798    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:05.908798    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:05.908798    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:05.908798    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:05.908798    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:05.908798    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:05.908798    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:05.908798    5716 round_trippers.go:580]     Audit-Id: 87b124fa-3686-47f8-a19e-787ab214f4d8
	I0314 18:04:05.909858    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-866600","namespace":"kube-system","uid":"22c38b7f-5371-47b4-b119-d9aec9d349cf","resourceVersion":"566","creationTimestamp":"2024-03-14T18:01:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.91.78:2379","kubernetes.io/config.hash":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.mirror":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.seen":"2024-03-14T18:01:15.863787877Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6062 chars]
	I0314 18:04:05.910309    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:05.910309    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:05.910309    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:05.910309    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:05.913096    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:04:05.913096    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:05.913096    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:05.913096    5716 round_trippers.go:580]     Audit-Id: 0546c38e-bde8-4ff1-962c-2e0c568b2cd1
	I0314 18:04:05.913096    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:05.913096    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:05.913096    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:05.913096    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:05.914242    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:05.914242    5716 pod_ready.go:92] pod "etcd-functional-866600" in "kube-system" namespace has status "Ready":"True"
	I0314 18:04:05.914242    5716 pod_ready.go:81] duration metric: took 4.5223496s for pod "etcd-functional-866600" in "kube-system" namespace to be "Ready" ...
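The wait just completed for etcd-functional-866600 can also be reproduced from the command line with stock kubectl; an equivalent one-liner, assuming the kubeconfig context created for this profile is named functional-866600 as minikube does by default:

    kubectl --context functional-866600 -n kube-system wait --for=condition=Ready pod/etcd-functional-866600 --timeout=4m0s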
	I0314 18:04:05.914242    5716 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:05.914242    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-866600
	I0314 18:04:05.914242    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:05.914773    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:05.914773    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:05.916911    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:04:05.916911    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:05.916911    5716 round_trippers.go:580]     Audit-Id: 75179ca4-7a31-41d0-b39f-da2ad0bf9eaf
	I0314 18:04:05.916911    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:05.916911    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:05.916911    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:05.916911    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:05.916911    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:05.917929    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-866600","namespace":"kube-system","uid":"9849501a-615b-4ff4-9914-35f4e0e718aa","resourceVersion":"564","creationTimestamp":"2024-03-14T18:01:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.91.78:8441","kubernetes.io/config.hash":"2f4e6493e948c5ff2c579c0625751fb6","kubernetes.io/config.mirror":"2f4e6493e948c5ff2c579c0625751fb6","kubernetes.io/config.seen":"2024-03-14T18:01:24.642625382Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7600 chars]
	I0314 18:04:05.918088    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:05.918088    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:05.918088    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:05.918088    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:05.920861    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:04:05.920861    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:05.920861    5716 round_trippers.go:580]     Audit-Id: a179e027-3fcc-4fe1-86e3-ebb7e36fc23e
	I0314 18:04:05.920861    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:05.920861    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:05.921141    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:05.921141    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:05.921141    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:05.922053    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:05.922470    5716 pod_ready.go:92] pod "kube-apiserver-functional-866600" in "kube-system" namespace has status "Ready":"True"
	I0314 18:04:05.922470    5716 pod_ready.go:81] duration metric: took 8.2272ms for pod "kube-apiserver-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:05.922524    5716 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:05.922623    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-866600
	I0314 18:04:05.922623    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:05.922623    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:05.922623    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:05.926544    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:05.926544    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:05.926544    5716 round_trippers.go:580]     Audit-Id: 0dcb5e25-9c18-4b50-9a39-584d5c340d9f
	I0314 18:04:05.926544    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:05.926607    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:05.926607    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:05.926626    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:05.926626    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:05.926787    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-866600","namespace":"kube-system","uid":"f415043c-6140-4e46-8769-1445681ccc85","resourceVersion":"560","creationTimestamp":"2024-03-14T18:01:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7b30f7025a5ea99b6924c6f19e03fd8d","kubernetes.io/config.mirror":"7b30f7025a5ea99b6924c6f19e03fd8d","kubernetes.io/config.seen":"2024-03-14T18:01:24.642626682Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7173 chars]
	I0314 18:04:05.926987    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:05.926987    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:05.926987    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:05.926987    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:05.930551    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:05.930551    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:05.930551    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:05.930551    5716 round_trippers.go:580]     Audit-Id: 474e025a-c270-4034-ad08-8e9dd42df739
	I0314 18:04:05.930551    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:05.930551    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:05.930551    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:05.930551    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:05.930551    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:05.931183    5716 pod_ready.go:92] pod "kube-controller-manager-functional-866600" in "kube-system" namespace has status "Ready":"True"
	I0314 18:04:05.931183    5716 pod_ready.go:81] duration metric: took 8.6579ms for pod "kube-controller-manager-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:05.931183    5716 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7dppw" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:05.931183    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/kube-proxy-7dppw
	I0314 18:04:05.931183    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:05.931183    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:05.931183    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:05.933999    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:04:05.934465    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:05.934465    5716 round_trippers.go:580]     Audit-Id: 1112b02e-80d6-485f-89ce-b41c09c969fc
	I0314 18:04:05.934465    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:05.934465    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:05.934465    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:05.934530    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:05.934530    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:05.934675    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7dppw","generateName":"kube-proxy-","namespace":"kube-system","uid":"8123be17-49ac-450e-9ff2-48b35f8a9a0f","resourceVersion":"557","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5c66890e-1d26-44c8-84a0-cba890186b64","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5c66890e-1d26-44c8-84a0-cba890186b64\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5736 chars]
	I0314 18:04:05.935060    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:05.935060    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:05.935060    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:05.935060    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:05.937629    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:04:05.937629    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:05.937629    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:05.937629    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:05.937629    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:05.937629    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:05.937629    5716 round_trippers.go:580]     Audit-Id: 4bf03ff9-593b-497b-bdb2-d1fb1962541a
	I0314 18:04:05.937629    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:05.937629    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:05.937629    5716 pod_ready.go:92] pod "kube-proxy-7dppw" in "kube-system" namespace has status "Ready":"True"
	I0314 18:04:05.937629    5716 pod_ready.go:81] duration metric: took 6.4459ms for pod "kube-proxy-7dppw" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:05.938591    5716 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:05.938591    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-866600
	I0314 18:04:05.938685    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:05.938717    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:05.938717    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:05.940985    5716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:04:05.941955    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:05.941955    5716 round_trippers.go:580]     Audit-Id: 29e9fa34-efe5-41b9-b48e-1522cb44394a
	I0314 18:04:05.942046    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:05.942046    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:05.942046    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:05.942046    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:05.942046    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:05.942161    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-866600","namespace":"kube-system","uid":"ae9aa2ce-db1b-4105-9a2c-243505551b2c","resourceVersion":"562","creationTimestamp":"2024-03-14T18:01:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7a19f9d78246bcc36cfab94a17fd28fb","kubernetes.io/config.mirror":"7a19f9d78246bcc36cfab94a17fd28fb","kubernetes.io/config.seen":"2024-03-14T18:01:24.642627682Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4903 chars]
	I0314 18:04:05.942660    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:05.942690    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:05.942690    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:05.942690    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:05.944599    5716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0314 18:04:05.944599    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:05.944599    5716 round_trippers.go:580]     Audit-Id: 60610679-b4fe-43a5-9154-cad2a918033b
	I0314 18:04:05.945615    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:05.945615    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:05.945615    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:05.945646    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:05.945646    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:05.945646    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:05.946112    5716 pod_ready.go:92] pod "kube-scheduler-functional-866600" in "kube-system" namespace has status "Ready":"True"
	I0314 18:04:05.946155    5716 pod_ready.go:81] duration metric: took 7.5627ms for pod "kube-scheduler-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:05.946155    5716 pod_ready.go:38] duration metric: took 7.5893933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
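	The pod_ready.go waits above each poll one control-plane pod and return once its PodReady condition reports "True", which is what produces the repeated GET / 200 OK cycles in this log. A minimal client-go sketch of such a wait loop (a hypothetical helper for illustration, not minikube's actual code):

```go
package podcheck

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the named pod every 500ms until its PodReady
// condition is True or the timeout elapses — the same loop shape as
// the pod_ready.go waits logged above.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```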
	I0314 18:04:05.946207    5716 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 18:04:05.963540    5716 command_runner.go:130] > -16
	I0314 18:04:05.963603    5716 ops.go:34] apiserver oom_adj: -16
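	The ops.go line above confirms the API server kept its OOM score adjustment of -16 by running `cat /proc/$(pgrep kube-apiserver)/oom_adj` over SSH. A rough local Go equivalent of that probe (a sketch; assumes a Linux host and a single matching process):

```go
package oomcheck

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj re-runs the probe from the log locally: resolve the
// kube-apiserver PID with pgrep, then read /proc/<pid>/oom_adj.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("pgrep kube-apiserver: %w", err)
	}
	pid := strings.TrimSpace(string(out)) // assumes exactly one PID
	raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(raw)), nil // "-16" in this run
}
```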
	I0314 18:04:05.963603    5716 kubeadm.go:591] duration metric: took 17.5912831s to restartPrimaryControlPlane
	I0314 18:04:05.963603    5716 kubeadm.go:393] duration metric: took 17.6649223s to StartCluster
	I0314 18:04:05.963675    5716 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:04:05.963810    5716 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:04:05.964948    5716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:04:05.966311    5716 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 18:04:05.966433    5716 addons.go:69] Setting storage-provisioner=true in profile "functional-866600"
	I0314 18:04:05.966311    5716 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.91.78 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:04:05.966433    5716 addons.go:69] Setting default-storageclass=true in profile "functional-866600"
	I0314 18:04:05.966504    5716 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-866600"
	I0314 18:04:05.966433    5716 addons.go:234] Setting addon storage-provisioner=true in "functional-866600"
	W0314 18:04:05.966585    5716 addons.go:243] addon storage-provisioner should already be in state true
	I0314 18:04:05.970821    5716 out.go:177] * Verifying Kubernetes components...
	I0314 18:04:05.966781    5716 host.go:66] Checking if "functional-866600" exists ...
	I0314 18:04:05.966826    5716 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:04:05.967153    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:04:05.972224    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:04:05.984064    5716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:04:06.256553    5716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:04:06.288072    5716 node_ready.go:35] waiting up to 6m0s for node "functional-866600" to be "Ready" ...
	I0314 18:04:06.288072    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:06.288072    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:06.288072    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:06.288072    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:06.292436    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:06.292436    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:06.292436    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:06.292436    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:06.292436    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:06.292436    5716 round_trippers.go:580]     Audit-Id: 13a3fe01-dcfc-4e0c-bb21-038e7d184fcd
	I0314 18:04:06.292436    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:06.292526    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:06.292821    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:06.293295    5716 node_ready.go:49] node "functional-866600" has status "Ready":"True"
	I0314 18:04:06.293354    5716 node_ready.go:38] duration metric: took 5.2235ms for node "functional-866600" to be "Ready" ...
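	node_ready.go above fetches the Node object and reports `"Ready":"True"` from its status conditions. For comparison with the pod wait earlier, a minimal sketch of that check (illustrative helper, standard client-go API):

```go
package nodecheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the node's NodeReady condition is True —
// the check behind the node_ready.go:49 line above.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```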
	I0314 18:04:06.293354    5716 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:04:06.314282    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods
	I0314 18:04:06.314282    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:06.314282    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:06.314282    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:06.318556    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:06.318556    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:06.318556    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:06.318556    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:06.318556    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:06.318556    5716 round_trippers.go:580]     Audit-Id: 55b88bd4-bed5-4c52-bda2-0f81424793fd
	I0314 18:04:06.318556    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:06.318556    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:06.319470    5716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"568"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"558","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 47991 chars]
	I0314 18:04:06.321559    5716 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n84nx" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:06.519085    5716 request.go:629] Waited for 197.3231ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n84nx
	I0314 18:04:06.519262    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n84nx
	I0314 18:04:06.519262    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:06.519262    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:06.519262    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:06.524561    5716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:04:06.524649    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:06.524649    5716 round_trippers.go:580]     Audit-Id: 06cac6b6-3803-479a-9bc2-27fdc621aade
	I0314 18:04:06.524649    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:06.524649    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:06.524649    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:06.524649    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:06.524734    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:06.524734    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"558","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6154 chars]
	I0314 18:04:06.708418    5716 request.go:629] Waited for 182.3238ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:06.708650    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:06.708650    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:06.708729    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:06.708763    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:06.713298    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:06.713298    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:06.713710    5716 round_trippers.go:580]     Audit-Id: bb347cb2-8181-46ac-877d-d124bdbcbca4
	I0314 18:04:06.713710    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:06.713710    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:06.713710    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:06.713710    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:06.713710    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:06 GMT
	I0314 18:04:06.714207    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:06.714632    5716 pod_ready.go:92] pod "coredns-5dd5756b68-n84nx" in "kube-system" namespace has status "Ready":"True"
	I0314 18:04:06.714689    5716 pod_ready.go:81] duration metric: took 393.1007ms for pod "coredns-5dd5756b68-n84nx" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:06.714745    5716 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:06.914309    5716 request.go:629] Waited for 199.2526ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-866600
	I0314 18:04:06.914409    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-866600
	I0314 18:04:06.914409    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:06.914409    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:06.914409    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:06.920094    5716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:04:06.920195    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:06.920195    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:06.920195    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:06.920195    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:07 GMT
	I0314 18:04:06.920195    5716 round_trippers.go:580]     Audit-Id: 8abbe736-53e5-4345-88a1-9baf51097876
	I0314 18:04:06.920195    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:06.920195    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:06.920422    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-866600","namespace":"kube-system","uid":"22c38b7f-5371-47b4-b119-d9aec9d349cf","resourceVersion":"566","creationTimestamp":"2024-03-14T18:01:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.91.78:2379","kubernetes.io/config.hash":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.mirror":"abc7192aa143e5462050502ecfd46bc1","kubernetes.io/config.seen":"2024-03-14T18:01:15.863787877Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6062 chars]
	I0314 18:04:07.119726    5716 request.go:629] Waited for 199.0834ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:07.119726    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:07.119726    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:07.119726    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:07.119726    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:07.123826    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:07.123919    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:07.123919    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:07.123919    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:07.123919    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:07.123919    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:07.123919    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:07 GMT
	I0314 18:04:07.123919    5716 round_trippers.go:580]     Audit-Id: b6539e70-b827-4d29-afa9-f6ed3244be2b
	I0314 18:04:07.123919    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:07.124456    5716 pod_ready.go:92] pod "etcd-functional-866600" in "kube-system" namespace has status "Ready":"True"
	I0314 18:04:07.124619    5716 pod_ready.go:81] duration metric: took 409.8446ms for pod "etcd-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:07.124619    5716 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:07.308257    5716 request.go:629] Waited for 183.4975ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-866600
	I0314 18:04:07.308426    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-866600
	I0314 18:04:07.308426    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:07.308426    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:07.308426    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:07.315120    5716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:04:07.315120    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:07.315120    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:07 GMT
	I0314 18:04:07.315120    5716 round_trippers.go:580]     Audit-Id: 6b2f8af0-c0aa-4709-ade1-1a28e2d03b1f
	I0314 18:04:07.315120    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:07.315120    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:07.315120    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:07.315120    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:07.315120    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-866600","namespace":"kube-system","uid":"9849501a-615b-4ff4-9914-35f4e0e718aa","resourceVersion":"564","creationTimestamp":"2024-03-14T18:01:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.91.78:8441","kubernetes.io/config.hash":"2f4e6493e948c5ff2c579c0625751fb6","kubernetes.io/config.mirror":"2f4e6493e948c5ff2c579c0625751fb6","kubernetes.io/config.seen":"2024-03-14T18:01:24.642625382Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7600 chars]
	I0314 18:04:07.518486    5716 request.go:629] Waited for 202.7052ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:07.518815    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:07.518815    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:07.518815    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:07.518815    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:07.523021    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:07.523075    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:07.523075    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:07 GMT
	I0314 18:04:07.523075    5716 round_trippers.go:580]     Audit-Id: 7e9bda10-9787-40e0-8588-e44080a2b9eb
	I0314 18:04:07.523075    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:07.523075    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:07.523075    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:07.523075    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:07.523075    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:07.523774    5716 pod_ready.go:92] pod "kube-apiserver-functional-866600" in "kube-system" namespace has status "Ready":"True"
	I0314 18:04:07.523774    5716 pod_ready.go:81] duration metric: took 399.1255ms for pod "kube-apiserver-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:07.523774    5716 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:07.708355    5716 request.go:629] Waited for 184.5674ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-866600
	I0314 18:04:07.708355    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-866600
	I0314 18:04:07.708355    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:07.708355    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:07.708355    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:07.712980    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:07.712980    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:07.712980    5716 round_trippers.go:580]     Audit-Id: 561cab5a-b094-45a9-a9fe-bd0b2a0490f6
	I0314 18:04:07.712980    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:07.712980    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:07.712980    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:07.713088    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:07.713114    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:07 GMT
	I0314 18:04:07.713181    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-866600","namespace":"kube-system","uid":"f415043c-6140-4e46-8769-1445681ccc85","resourceVersion":"560","creationTimestamp":"2024-03-14T18:01:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7b30f7025a5ea99b6924c6f19e03fd8d","kubernetes.io/config.mirror":"7b30f7025a5ea99b6924c6f19e03fd8d","kubernetes.io/config.seen":"2024-03-14T18:01:24.642626682Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7173 chars]
	I0314 18:04:07.915295    5716 request.go:629] Waited for 200.7686ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:07.915494    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:07.915494    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:07.915494    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:07.915494    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:07.918506    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:07.919401    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:07.919401    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:07.919401    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:08 GMT
	I0314 18:04:07.919401    5716 round_trippers.go:580]     Audit-Id: b094f13b-0055-47fe-a41a-f4db010d2a85
	I0314 18:04:07.919401    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:07.919401    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:07.919401    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:07.919655    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:07.919732    5716 pod_ready.go:92] pod "kube-controller-manager-functional-866600" in "kube-system" namespace has status "Ready":"True"
	I0314 18:04:07.919732    5716 pod_ready.go:81] duration metric: took 395.9291ms for pod "kube-controller-manager-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:07.919732    5716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7dppw" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:07.972481    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:04:07.972545    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:04:07.972857    5716 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:04:07.973502    5716 kapi.go:59] client config for functional-866600: &rest.Config{Host:"https://172.17.91.78:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-866600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-866600\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 18:04:07.974140    5716 addons.go:234] Setting addon default-storageclass=true in "functional-866600"
	W0314 18:04:07.974140    5716 addons.go:243] addon default-storageclass should already be in state true
	I0314 18:04:07.974140    5716 host.go:66] Checking if "functional-866600" exists ...
	I0314 18:04:07.974839    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:04:07.993419    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:04:07.993419    5716 main.go:141] libmachine: [stderr =====>] : 
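	The libmachine lines above shell out to PowerShell to read the Hyper-V VM's state ("Running" on both probes). A rough Go equivalent of that probe (illustrative only; quoting and argument validation are elided, so the VM name must be a trusted value):

```go
package hyperv

import (
	"os/exec"
	"strings"
)

// vmState runs the same PowerShell expression the log shows,
// e.g. ( Hyper-V\Get-VM functional-866600 ).state, and returns
// its trimmed stdout ("Running", "Off", ...).
func vmState(name string) (string, error) {
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		"( Hyper-V\\Get-VM "+name+" ).state")
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err
}
```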
	I0314 18:04:07.998111    5716 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 18:04:08.000679    5716 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:04:08.000679    5716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 18:04:08.000781    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:04:08.120567    5716 request.go:629] Waited for 200.5733ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/kube-proxy-7dppw
	I0314 18:04:08.120567    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/kube-proxy-7dppw
	I0314 18:04:08.120567    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:08.120567    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:08.120567    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:08.123940    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:08.124465    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:08.124465    5716 round_trippers.go:580]     Audit-Id: 8ef6bbb8-ebcc-4160-8b73-5da2cce8a377
	I0314 18:04:08.124465    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:08.124465    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:08.124465    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:08.124465    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:08.124465    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:08 GMT
	I0314 18:04:08.124647    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7dppw","generateName":"kube-proxy-","namespace":"kube-system","uid":"8123be17-49ac-450e-9ff2-48b35f8a9a0f","resourceVersion":"557","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5c66890e-1d26-44c8-84a0-cba890186b64","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5c66890e-1d26-44c8-84a0-cba890186b64\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5736 chars]
	I0314 18:04:08.309913    5716 request.go:629] Waited for 184.6398ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:08.310044    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:08.310044    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:08.310044    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:08.310140    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:08.315567    5716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:04:08.315567    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:08.315567    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:08.315567    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:08.315567    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:08.315567    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:08 GMT
	I0314 18:04:08.315567    5716 round_trippers.go:580]     Audit-Id: a8385687-f629-4077-8f6d-09e4774b0572
	I0314 18:04:08.315567    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:08.315567    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:08.316561    5716 pod_ready.go:92] pod "kube-proxy-7dppw" in "kube-system" namespace has status "Ready":"True"
	I0314 18:04:08.316561    5716 pod_ready.go:81] duration metric: took 396.7992ms for pod "kube-proxy-7dppw" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:08.316561    5716 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:08.515002    5716 request.go:629] Waited for 198.4265ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-866600
	I0314 18:04:08.515342    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-866600
	I0314 18:04:08.515468    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:08.515468    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:08.515468    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:08.541402    5716 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0314 18:04:08.541402    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:08.541402    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:08.541402    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:08.541402    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:08.541402    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:08 GMT
	I0314 18:04:08.541402    5716 round_trippers.go:580]     Audit-Id: e2ee2e55-fa65-4604-a902-41112465daa4
	I0314 18:04:08.541402    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:08.543620    5716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-866600","namespace":"kube-system","uid":"ae9aa2ce-db1b-4105-9a2c-243505551b2c","resourceVersion":"562","creationTimestamp":"2024-03-14T18:01:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7a19f9d78246bcc36cfab94a17fd28fb","kubernetes.io/config.mirror":"7a19f9d78246bcc36cfab94a17fd28fb","kubernetes.io/config.seen":"2024-03-14T18:01:24.642627682Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4903 chars]
	I0314 18:04:08.720182    5716 request.go:629] Waited for 176.0539ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:08.720391    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes/functional-866600
	I0314 18:04:08.720391    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:08.720391    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:08.720391    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:08.737067    5716 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0314 18:04:08.737679    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:08.737743    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:08.737743    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:09 GMT
	I0314 18:04:08.737743    5716 round_trippers.go:580]     Audit-Id: d60ecb19-0a1f-4eff-958f-c5650abf6068
	I0314 18:04:08.737743    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:08.737743    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:08.737743    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:08.737743    5716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-14T18:01:20Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0314 18:04:08.738292    5716 pod_ready.go:92] pod "kube-scheduler-functional-866600" in "kube-system" namespace has status "Ready":"True"
	I0314 18:04:08.738365    5716 pod_ready.go:81] duration metric: took 421.7736ms for pod "kube-scheduler-functional-866600" in "kube-system" namespace to be "Ready" ...
	I0314 18:04:08.738365    5716 pod_ready.go:38] duration metric: took 2.4448315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
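
The pod_ready.go lines above poll each system-critical pod until its PodReady condition reports True, within a 6m0s per-pod budget. Below is a minimal client-go sketch of that style of check, assuming a kubeconfig at the default path; the helper name isPodReady and the hard-coded pod name are illustrative, not minikube's actual code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-7dppw", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
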
	I0314 18:04:08.738424    5716 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:04:08.748226    5716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:04:08.769216    5716 command_runner.go:130] > 7145
	I0314 18:04:08.769700    5716 api_server.go:72] duration metric: took 2.8029907s to wait for apiserver process to appear ...
	I0314 18:04:08.769700    5716 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:04:08.769700    5716 api_server.go:253] Checking apiserver healthz at https://172.17.91.78:8441/healthz ...
	I0314 18:04:08.779638    5716 api_server.go:279] https://172.17.91.78:8441/healthz returned 200:
	ok
	I0314 18:04:08.779840    5716 round_trippers.go:463] GET https://172.17.91.78:8441/version
	I0314 18:04:08.779840    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:08.779908    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:08.779908    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:08.781166    5716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0314 18:04:08.781166    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:08.781166    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:08.781166    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:08.781166    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:08.781166    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:08.781166    5716 round_trippers.go:580]     Content-Length: 264
	I0314 18:04:08.781166    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:09 GMT
	I0314 18:04:08.781166    5716 round_trippers.go:580]     Audit-Id: e7e75923-2904-4836-a49c-d5f095060fa8
	I0314 18:04:08.781166    5716 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0314 18:04:08.781166    5716 api_server.go:141] control plane version: v1.28.4
	I0314 18:04:08.781166    5716 api_server.go:131] duration metric: took 11.4643ms to wait for apiserver health ...
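
api_server.go above first waits for the kube-apiserver process via pgrep, then polls /healthz until it answers 200 with the body "ok", and only then reads /version. A sketch of such a health poll against the endpoint from the log; InsecureSkipVerify is an illustration-only shortcut (minikube verifies against the cluster CA and presents client certificates).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Illustration only: skip TLS verification; a real client trusts
		// the cluster CA instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://172.17.91.78:8441/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(time.Second) // retry; a real caller also enforces a deadline
		}
	}
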
	I0314 18:04:08.781166    5716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:04:08.910472    5716 request.go:629] Waited for 129.0106ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods
	I0314 18:04:08.910575    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods
	I0314 18:04:08.910643    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:08.910643    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:08.910643    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:08.916323    5716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:04:08.916404    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:08.916404    5716 round_trippers.go:580]     Audit-Id: 67d802fc-e707-4b03-9b6b-956e01d728c4
	I0314 18:04:08.916404    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:08.916404    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:08.916477    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:08.916477    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:08.916477    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:09 GMT
	I0314 18:04:08.918021    5716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"558","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 47991 chars]
	I0314 18:04:08.920472    5716 system_pods.go:59] 7 kube-system pods found
	I0314 18:04:08.920528    5716 system_pods.go:61] "coredns-5dd5756b68-n84nx" [5d8f04ff-70b9-4332-a120-8993958cfd33] Running
	I0314 18:04:08.920586    5716 system_pods.go:61] "etcd-functional-866600" [22c38b7f-5371-47b4-b119-d9aec9d349cf] Running
	I0314 18:04:08.920586    5716 system_pods.go:61] "kube-apiserver-functional-866600" [9849501a-615b-4ff4-9914-35f4e0e718aa] Running
	I0314 18:04:08.920586    5716 system_pods.go:61] "kube-controller-manager-functional-866600" [f415043c-6140-4e46-8769-1445681ccc85] Running
	I0314 18:04:08.920586    5716 system_pods.go:61] "kube-proxy-7dppw" [8123be17-49ac-450e-9ff2-48b35f8a9a0f] Running
	I0314 18:04:08.920586    5716 system_pods.go:61] "kube-scheduler-functional-866600" [ae9aa2ce-db1b-4105-9a2c-243505551b2c] Running
	I0314 18:04:08.920586    5716 system_pods.go:61] "storage-provisioner" [74f7dcf3-94a7-441e-a9c5-207e2bbd1efe] Running
	I0314 18:04:08.920586    5716 system_pods.go:74] duration metric: took 139.4099ms to wait for pod list to return data ...
	I0314 18:04:08.920636    5716 default_sa.go:34] waiting for default service account to be created ...
	I0314 18:04:09.118069    5716 request.go:629] Waited for 197.234ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/namespaces/default/serviceaccounts
	I0314 18:04:09.118069    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/default/serviceaccounts
	I0314 18:04:09.118069    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:09.118069    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:09.118069    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:09.122438    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:09.122438    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:09.122438    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:09.122438    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:09.122438    5716 round_trippers.go:580]     Content-Length: 261
	I0314 18:04:09.122438    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:09 GMT
	I0314 18:04:09.122438    5716 round_trippers.go:580]     Audit-Id: cbfa7603-d465-4edc-84f7-accb6b9f7001
	I0314 18:04:09.122438    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:09.122438    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:09.122438    5716 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4fbf1033-e261-4c82-b4f1-882204628962","resourceVersion":"310","creationTimestamp":"2024-03-14T18:01:38Z"}}]}
	I0314 18:04:09.122438    5716 default_sa.go:45] found service account: "default"
	I0314 18:04:09.122438    5716 default_sa.go:55] duration metric: took 201.788ms for default service account to be created ...
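
The recurring "Waited for ... due to client-side throttling, not priority and fairness" entries come from client-go's own rate limiter, which defers requests once the client exceeds its configured QPS/Burst; the server-side APF headers in the responses (X-Kubernetes-Pf-Flowschema-Uid, X-Kubernetes-Pf-Prioritylevel-Uid) are unrelated to these waits. A sketch of where those client-side knobs live, with illustrative values rather than minikube's:

	package main

	import (
		"fmt"

		"k8s.io/client-go/rest"
	)

	func main() {
		cfg := &rest.Config{
			Host:  "https://172.17.91.78:8441",
			QPS:   5,  // sustained client-side requests per second
			Burst: 10, // short-term burst allowance above QPS
		}
		// Requests beyond QPS/Burst block inside the client and are logged
		// as "Waited for ... due to client-side throttling".
		fmt.Printf("host=%s qps=%.0f burst=%d\n", cfg.Host, cfg.QPS, cfg.Burst)
	}
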
	I0314 18:04:09.122438    5716 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 18:04:09.308948    5716 request.go:629] Waited for 186.225ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods
	I0314 18:04:09.308948    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/namespaces/kube-system/pods
	I0314 18:04:09.309058    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:09.309058    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:09.309058    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:09.313343    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:09.314341    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:09.314341    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:09.314341    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:09 GMT
	I0314 18:04:09.314341    5716 round_trippers.go:580]     Audit-Id: 793a0d3b-308a-4df3-92c5-510300d6c7f9
	I0314 18:04:09.314341    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:09.314341    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:09.314341    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:09.315232    5716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n84nx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5d8f04ff-70b9-4332-a120-8993958cfd33","resourceVersion":"558","creationTimestamp":"2024-03-14T18:01:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a19c2344-e6b5-4ec0-a733-b2c1a49d774f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T18:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a19c2344-e6b5-4ec0-a733-b2c1a49d774f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 47991 chars]
	I0314 18:04:09.317323    5716 system_pods.go:86] 7 kube-system pods found
	I0314 18:04:09.317323    5716 system_pods.go:89] "coredns-5dd5756b68-n84nx" [5d8f04ff-70b9-4332-a120-8993958cfd33] Running
	I0314 18:04:09.317323    5716 system_pods.go:89] "etcd-functional-866600" [22c38b7f-5371-47b4-b119-d9aec9d349cf] Running
	I0314 18:04:09.317323    5716 system_pods.go:89] "kube-apiserver-functional-866600" [9849501a-615b-4ff4-9914-35f4e0e718aa] Running
	I0314 18:04:09.317323    5716 system_pods.go:89] "kube-controller-manager-functional-866600" [f415043c-6140-4e46-8769-1445681ccc85] Running
	I0314 18:04:09.317323    5716 system_pods.go:89] "kube-proxy-7dppw" [8123be17-49ac-450e-9ff2-48b35f8a9a0f] Running
	I0314 18:04:09.317469    5716 system_pods.go:89] "kube-scheduler-functional-866600" [ae9aa2ce-db1b-4105-9a2c-243505551b2c] Running
	I0314 18:04:09.317469    5716 system_pods.go:89] "storage-provisioner" [74f7dcf3-94a7-441e-a9c5-207e2bbd1efe] Running
	I0314 18:04:09.317469    5716 system_pods.go:126] duration metric: took 195.0162ms to wait for k8s-apps to be running ...
	I0314 18:04:09.317469    5716 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 18:04:09.326203    5716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:04:09.350537    5716 system_svc.go:56] duration metric: took 33.0652ms WaitForService to wait for kubelet
	I0314 18:04:09.350613    5716 kubeadm.go:576] duration metric: took 3.3838602s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:04:09.350613    5716 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:04:09.516332    5716 request.go:629] Waited for 165.4026ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.91.78:8441/api/v1/nodes
	I0314 18:04:09.516332    5716 round_trippers.go:463] GET https://172.17.91.78:8441/api/v1/nodes
	I0314 18:04:09.516332    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:09.516332    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:09.516332    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:09.519902    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:09.520705    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:09.520761    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:09.520761    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:09 GMT
	I0314 18:04:09.520761    5716 round_trippers.go:580]     Audit-Id: a5122dbf-1752-4732-8302-801bd7ee63ba
	I0314 18:04:09.520761    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:09.520761    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:09.520761    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:09.520761    5716 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"functional-866600","uid":"d6b958a8-7514-4695-bae4-635d1e29c4f0","resourceVersion":"491","creationTimestamp":"2024-03-14T18:01:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-866600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"functional-866600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T18_01_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4838 chars]
	I0314 18:04:09.521445    5716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:04:09.521445    5716 node_conditions.go:123] node cpu capacity is 2
	I0314 18:04:09.521445    5716 node_conditions.go:105] duration metric: took 170.8194ms to run NodePressure ...
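
node_conditions.go above reads each node's capacity and checks that none of the pressure conditions (MemoryPressure, DiskPressure, PIDPressure) is set. A compact sketch of the same verification; error handling on the setup calls is elided for brevity and the kubeconfig path is an assumption.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // errors elided
		cs, _ := kubernetes.NewForConfig(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					fmt.Printf("  %s=%s\n", c.Type, c.Status) // any True here is a pressure problem
				}
			}
		}
	}
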
	I0314 18:04:09.521445    5716 start.go:240] waiting for startup goroutines ...
	I0314 18:04:10.035041    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:04:10.035142    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:04:10.035142    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:04:10.035142    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:04:10.035237    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:04:10.035237    5716 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 18:04:10.035237    5716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 18:04:10.035338    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
	I0314 18:04:12.051722    5716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:04:12.051722    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:04:12.052727    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
	I0314 18:04:12.467497    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:04:12.467497    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:04:12.467497    5716 sshutil.go:53] new ssh client: &{IP:172.17.91.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-866600\id_rsa Username:docker}
	I0314 18:04:12.602473    5716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:04:13.616954    5716 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0314 18:04:13.616954    5716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0314 18:04:13.616954    5716 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0314 18:04:13.616954    5716 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0314 18:04:13.616954    5716 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0314 18:04:13.616954    5716 command_runner.go:130] > pod/storage-provisioner configured
	I0314 18:04:13.616954    5716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0144065s)
	I0314 18:04:14.426424    5716 main.go:141] libmachine: [stdout =====>] : 172.17.91.78
	
	I0314 18:04:14.427091    5716 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:04:14.427091    5716 sshutil.go:53] new ssh client: &{IP:172.17.91.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-866600\id_rsa Username:docker}
	I0314 18:04:14.551264    5716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 18:04:14.795461    5716 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0314 18:04:14.795754    5716 round_trippers.go:463] GET https://172.17.91.78:8441/apis/storage.k8s.io/v1/storageclasses
	I0314 18:04:14.795819    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:14.795819    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:14.795819    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:14.799405    5716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:04:14.799405    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:14.799405    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:14.799405    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:14.799405    5716 round_trippers.go:580]     Content-Length: 1273
	I0314 18:04:14.799405    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:15 GMT
	I0314 18:04:14.799405    5716 round_trippers.go:580]     Audit-Id: c746da1f-f1da-420e-9f3e-4ff6d7c4a31f
	I0314 18:04:14.799405    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:14.799405    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:14.799405    5716 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"standard","uid":"2888c9e3-d785-462d-8523-d629181314c2","resourceVersion":"388","creationTimestamp":"2024-03-14T18:01:47Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-14T18:01:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0314 18:04:14.800487    5716 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2888c9e3-d785-462d-8523-d629181314c2","resourceVersion":"388","creationTimestamp":"2024-03-14T18:01:47Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-14T18:01:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0314 18:04:14.800590    5716 round_trippers.go:463] PUT https://172.17.91.78:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0314 18:04:14.800590    5716 round_trippers.go:469] Request Headers:
	I0314 18:04:14.800590    5716 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:04:14.800648    5716 round_trippers.go:473]     Content-Type: application/json
	I0314 18:04:14.800648    5716 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:04:14.805044    5716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:04:14.805044    5716 round_trippers.go:577] Response Headers:
	I0314 18:04:14.805044    5716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7afade58-6a21-4091-9b10-1e0f6959fe88
	I0314 18:04:14.805044    5716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5eedacab-8917-4088-9cb2-f35d60041e49
	I0314 18:04:14.805044    5716 round_trippers.go:580]     Content-Length: 1220
	I0314 18:04:14.805044    5716 round_trippers.go:580]     Date: Thu, 14 Mar 2024 18:04:15 GMT
	I0314 18:04:14.805044    5716 round_trippers.go:580]     Audit-Id: ac1850df-4897-4577-8da5-20af5e8f2b42
	I0314 18:04:14.805044    5716 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 18:04:14.805044    5716 round_trippers.go:580]     Content-Type: application/json
	I0314 18:04:14.805044    5716 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2888c9e3-d785-462d-8523-d629181314c2","resourceVersion":"388","creationTimestamp":"2024-03-14T18:01:47Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-14T18:01:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0314 18:04:14.811699    5716 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0314 18:04:14.815183    5716 addons.go:505] duration metric: took 8.8482732s for enable addons: enabled=[storage-provisioner default-storageclass]
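
The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is a read-modify-write of the "standard" class: the PUT carries the resourceVersion returned by the GET, so a concurrent writer surfaces as a 409 conflict rather than a silent overwrite. A hedged sketch of the same round trip through client-go's typed StorageV1 API, not minikube's actual addon code:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // errors elided
		cs, _ := kubernetes.NewForConfig(cfg)
		ctx := context.TODO()

		// GET: read the current object, including its resourceVersion.
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Modify: keep (or set) the default-class annotation.
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		// PUT: Update sends the object back with the resourceVersion from the
		// GET; a conflicting concurrent write makes this fail with 409.
		sc, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("updated", sc.Name, "rv", sc.ResourceVersion)
	}
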
	I0314 18:04:14.815183    5716 start.go:245] waiting for cluster config update ...
	I0314 18:04:14.815183    5716 start.go:254] writing updated cluster config ...
	I0314 18:04:14.823827    5716 ssh_runner.go:195] Run: rm -f paused
	I0314 18:04:14.947806    5716 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 18:04:14.950574    5716 out.go:177] * Done! kubectl is now configured to use "functional-866600" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 14 18:03:52 functional-866600 dockerd[5604]: time="2024-03-14T18:03:52.871174779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:03:52 functional-866600 dockerd[5604]: time="2024-03-14T18:03:52.871286089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:03:56 functional-866600 cri-dockerd[5829]: time="2024-03-14T18:03:56Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.418829284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.418943894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.418958695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.419653153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.526633929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.526916352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.527067265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.527925836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.589851074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.590144598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.590241906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.592407386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:03:58 functional-866600 cri-dockerd[5829]: time="2024-03-14T18:03:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f2cf238e6eb5e6b20f8841a5be2b4ff09cc5cbeab2e1ead0a142da681d2d5bc/resolv.conf as [nameserver 172.17.80.1]"
	Mar 14 18:03:58 functional-866600 cri-dockerd[5829]: time="2024-03-14T18:03:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/321473f253c4caf6d490479847a206427e00f445e7114b88024ab87cb42d3247/resolv.conf as [nameserver 172.17.80.1]"
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.950476194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.950809022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.950891429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.951060043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.960851155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.960899759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.960912260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:03:58 functional-866600 dockerd[5604]: time="2024-03-14T18:03:58.961001567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	885ec235a9b59       6e38f40d628db       About a minute ago   Running             storage-provisioner       1                   321473f253c4c       storage-provisioner
	f2d1880a85759       83f6cc407eed8       About a minute ago   Running             kube-proxy                1                   5f2cf238e6eb5       kube-proxy-7dppw
	cb1a796015144       ead0a4a53df89       About a minute ago   Running             coredns                   1                   f1acce864d5d8       coredns-5dd5756b68-n84nx
	1067fb44d98eb       73deb9a3f7025       About a minute ago   Running             etcd                      2                   b67b2eb37c749       etcd-functional-866600
	1e7a8d359afb7       7fe0e6f37db33       About a minute ago   Running             kube-apiserver            1                   b214f4f9e9edd       kube-apiserver-functional-866600
	a51172c26d796       e3db313c6dbc0       About a minute ago   Running             kube-scheduler            2                   98766cb871220       kube-scheduler-functional-866600
	1bf795c136e86       d058aa5ab969c       About a minute ago   Running             kube-controller-manager   2                   39ecf8ac63942       kube-controller-manager-functional-866600
	0ab078b3b1cb2       73deb9a3f7025       2 minutes ago        Exited              etcd                      1                   e1d44ecb441a1       etcd-functional-866600
	1e6c09ec79cb5       e3db313c6dbc0       2 minutes ago        Exited              kube-scheduler            1                   c3d217441c89d       kube-scheduler-functional-866600
	eb1790abcad20       d058aa5ab969c       2 minutes ago        Exited              kube-controller-manager   1                   9bce7b51a10dc       kube-controller-manager-functional-866600
	44e064493f5c0       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       0                   04cd8910e4682       storage-provisioner
	35b40dfba1a9e       ead0a4a53df89       4 minutes ago        Exited              coredns                   0                   ad2a39b82075c       coredns-5dd5756b68-n84nx
	8eeafd22647ed       83f6cc407eed8       4 minutes ago        Exited              kube-proxy                0                   36c6f45193652       kube-proxy-7dppw
	b459c7260fab4       7fe0e6f37db33       4 minutes ago        Exited              kube-apiserver            0                   7e30d4621f75d       kube-apiserver-functional-866600
	
	
	==> coredns [35b40dfba1a9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	[INFO] Reloading complete
	[INFO] 127.0.0.1:48560 - 18335 "HINFO IN 4362133751033808714.5612392780188437185. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.047081757s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cb1a79601514] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40397 - 23131 "HINFO IN 8689373053677289354.363342939867080315. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.132530596s
	
	
	==> describe nodes <==
	Name:               functional-866600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-866600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=functional-866600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T18_01_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:01:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-866600
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:05:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:05:27 +0000   Thu, 14 Mar 2024 18:01:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:05:27 +0000   Thu, 14 Mar 2024 18:01:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:05:27 +0000   Thu, 14 Mar 2024 18:01:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:05:27 +0000   Thu, 14 Mar 2024 18:01:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.91.78
	  Hostname:    functional-866600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	System Info:
	  Machine ID:                 db39fbee31db4b818b1426d384612ad6
	  System UUID:                6cb28456-7aa0-0c4d-bd5c-e6c6a7c8b152
	  Boot ID:                    260a3acc-840a-48f4-88f2-80ca830a04ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-n84nx                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m13s
	  kube-system                 etcd-functional-866600                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m29s
	  kube-system                 kube-apiserver-functional-866600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-controller-manager-functional-866600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-7dppw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-functional-866600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 4m11s            kube-proxy       
	  Normal  Starting                 112s             kube-proxy       
	  Normal  NodeHasSufficientPID     4m27s            kubelet          Node functional-866600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s            kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m27s            kubelet          Node functional-866600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s            kubelet          Node functional-866600 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m27s            kubelet          Starting kubelet.
	  Normal  NodeReady                4m23s            kubelet          Node functional-866600 status is now: NodeReady
	  Normal  RegisteredNode           4m14s            node-controller  Node functional-866600 event: Registered Node functional-866600 in Controller
	  Normal  Starting                 2m               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)  kubelet          Node functional-866600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)  kubelet          Node functional-866600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x7 over 2m)  kubelet          Node functional-866600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           103s             node-controller  Node functional-866600 event: Registered Node functional-866600 in Controller
	
	
	==> dmesg <==
	[  +5.128399] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	[  +0.097667] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.625070] systemd-fstab-generator[1795]: Ignoring "noauto" option for root device
	[  +0.101755] kauditd_printk_skb: 12 callbacks suppressed
	[  +9.302200] systemd-fstab-generator[2790]: Ignoring "noauto" option for root device
	[  +0.129915] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.993959] systemd-fstab-generator[3513]: Ignoring "noauto" option for root device
	[  +0.180371] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.467469] kauditd_printk_skb: 80 callbacks suppressed
	[Mar14 18:02] kauditd_printk_skb: 8 callbacks suppressed
	[Mar14 18:03] systemd-fstab-generator[5122]: Ignoring "noauto" option for root device
	[  +0.610339] systemd-fstab-generator[5158]: Ignoring "noauto" option for root device
	[  +0.258574] systemd-fstab-generator[5170]: Ignoring "noauto" option for root device
	[  +0.285482] systemd-fstab-generator[5185]: Ignoring "noauto" option for root device
	[  +5.233614] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.885380] systemd-fstab-generator[5777]: Ignoring "noauto" option for root device
	[  +0.204473] systemd-fstab-generator[5789]: Ignoring "noauto" option for root device
	[  +0.206510] systemd-fstab-generator[5801]: Ignoring "noauto" option for root device
	[  +0.259347] systemd-fstab-generator[5816]: Ignoring "noauto" option for root device
	[  +0.832876] systemd-fstab-generator[5968]: Ignoring "noauto" option for root device
	[  +0.825013] kauditd_printk_skb: 140 callbacks suppressed
	[  +2.777894] systemd-fstab-generator[6701]: Ignoring "noauto" option for root device
	[  +7.522166] kauditd_printk_skb: 98 callbacks suppressed
	[Mar14 18:04] systemd-fstab-generator[7607]: Ignoring "noauto" option for root device
	[  +0.159080] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [0ab078b3b1cb] <==
	{"level":"warn","ts":"2024-03-14T18:03:49.102801Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-03-14T18:03:49.102873Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.91.78:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.91.78:2380","--initial-cluster=functional-866600=https://172.17.91.78:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.91.78:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.91.78:2380","--name=functional-866600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-03-14T18:03:49.102934Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-03-14T18:03:49.102956Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-03-14T18:03:49.102964Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.91.78:2380"]}
	{"level":"info","ts":"2024-03-14T18:03:49.102997Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T18:03:49.103482Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.91.78:2379"]}
	{"level":"info","ts":"2024-03-14T18:03:49.103791Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-866600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.91.78:2380"],"listen-peer-urls":["https://172.17.91.78:2380"],"advertise-client-urls":["https://172.17.91.78:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.91.78:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster
-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-03-14T18:03:49.118897Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"14.866825ms"}
	{"level":"info","ts":"2024-03-14T18:03:49.129175Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-03-14T18:03:49.153156Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"eaa600fcd186dc78","local-member-id":"1555fc3056a8dcbd","commit-index":524}
	{"level":"info","ts":"2024-03-14T18:03:49.153281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1555fc3056a8dcbd switched to configuration voters=()"}
	{"level":"info","ts":"2024-03-14T18:03:49.153314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1555fc3056a8dcbd became follower at term 2"}
	{"level":"info","ts":"2024-03-14T18:03:49.153329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 1555fc3056a8dcbd [peers: [], term: 2, commit: 524, applied: 0, lastindex: 524, lastterm: 2]"}
	
	
	==> etcd [1067fb44d98e] <==
	{"level":"info","ts":"2024-03-14T18:03:53.179896Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T18:03:53.179989Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-14T18:03:53.184198Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T18:03:53.184308Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T18:03:53.184321Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T18:03:53.18483Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.91.78:2380"}
	{"level":"info","ts":"2024-03-14T18:03:53.184868Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.91.78:2380"}
	{"level":"info","ts":"2024-03-14T18:03:53.185351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1555fc3056a8dcbd switched to configuration voters=(1537412132359429309)"}
	{"level":"info","ts":"2024-03-14T18:03:53.185435Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"eaa600fcd186dc78","local-member-id":"1555fc3056a8dcbd","added-peer-id":"1555fc3056a8dcbd","added-peer-peer-urls":["https://172.17.91.78:2380"]}
	{"level":"info","ts":"2024-03-14T18:03:53.185541Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"eaa600fcd186dc78","local-member-id":"1555fc3056a8dcbd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:03:53.185572Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:03:54.705304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1555fc3056a8dcbd is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-14T18:03:54.705347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1555fc3056a8dcbd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-14T18:03:54.705381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1555fc3056a8dcbd received MsgPreVoteResp from 1555fc3056a8dcbd at term 2"}
	{"level":"info","ts":"2024-03-14T18:03:54.705395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1555fc3056a8dcbd became candidate at term 3"}
	{"level":"info","ts":"2024-03-14T18:03:54.705408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1555fc3056a8dcbd received MsgVoteResp from 1555fc3056a8dcbd at term 3"}
	{"level":"info","ts":"2024-03-14T18:03:54.705418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1555fc3056a8dcbd became leader at term 3"}
	{"level":"info","ts":"2024-03-14T18:03:54.705427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1555fc3056a8dcbd elected leader 1555fc3056a8dcbd at term 3"}
	{"level":"info","ts":"2024-03-14T18:03:54.714821Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1555fc3056a8dcbd","local-member-attributes":"{Name:functional-866600 ClientURLs:[https://172.17.91.78:2379]}","request-path":"/0/members/1555fc3056a8dcbd/attributes","cluster-id":"eaa600fcd186dc78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T18:03:54.714837Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:03:54.715149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:03:54.716384Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.91.78:2379"}
	{"level":"info","ts":"2024-03-14T18:03:54.716438Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T18:03:54.716647Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T18:03:54.716662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:05:51 up 6 min,  0 users,  load average: 0.42, 0.57, 0.27
	Linux functional-866600 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1e7a8d359afb] <==
	I0314 18:03:56.237262       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0314 18:03:56.238356       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 18:03:56.238613       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 18:03:56.234646       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 18:03:56.379265       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 18:03:56.382227       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 18:03:56.383222       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 18:03:56.392524       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 18:03:56.397439       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 18:03:56.426257       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 18:03:56.428600       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 18:03:56.430400       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 18:03:56.436149       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 18:03:56.436260       1 aggregator.go:166] initial CRD sync complete...
	I0314 18:03:56.436408       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 18:03:56.436498       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 18:03:56.436567       1 cache.go:39] Caches are synced for autoregister controller
	I0314 18:03:57.231873       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 18:03:58.282954       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 18:03:58.310380       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 18:03:58.416684       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 18:03:58.500995       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 18:03:58.553976       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 18:04:08.973329       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 18:04:09.018135       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [b459c7260fab] <==
	W0314 18:03:42.203151       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.207004       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.207019       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.256260       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.267195       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.281587       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.316758       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.319520       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.345896       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.357216       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.444872       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.516280       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.525177       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.525301       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.597975       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.618470       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.658444       1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.696916       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.703845       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.750772       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.825844       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.894456       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.905930       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.954263       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0314 18:03:42.971220       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1bf795c136e8] <==
	I0314 18:04:08.842209       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 18:04:08.844526       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 18:04:08.850527       1 shared_informer.go:318] Caches are synced for GC
	I0314 18:04:08.852685       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 18:04:08.859740       1 shared_informer.go:318] Caches are synced for namespace
	I0314 18:04:08.865316       1 shared_informer.go:318] Caches are synced for service account
	I0314 18:04:08.867854       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 18:04:08.870359       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 18:04:08.876213       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 18:04:08.888037       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 18:04:08.913245       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 18:04:08.924812       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 18:04:08.934238       1 shared_informer.go:318] Caches are synced for disruption
	I0314 18:04:08.939459       1 shared_informer.go:318] Caches are synced for taint
	I0314 18:04:08.939801       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 18:04:08.940076       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-866600"
	I0314 18:04:08.940620       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0314 18:04:08.940663       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 18:04:08.940869       1 taint_manager.go:210] "Sending events to api server"
	I0314 18:04:08.941985       1 event.go:307] "Event occurred" object="functional-866600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-866600 event: Registered Node functional-866600 in Controller"
	I0314 18:04:08.954239       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 18:04:08.995457       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 18:04:09.408667       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 18:04:09.408767       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 18:04:09.424680       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [eb1790abcad2] <==
	
	
	==> kube-proxy [8eeafd22647e] <==
	I0314 18:01:39.706981       1 server_others.go:69] "Using iptables proxy"
	I0314 18:01:39.819254       1 node.go:141] Successfully retrieved node IP: 172.17.91.78
	I0314 18:01:40.085417       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:01:40.085460       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:01:40.089587       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:01:40.089640       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:01:40.089947       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:01:40.089979       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:01:40.097677       1 config.go:188] "Starting service config controller"
	I0314 18:01:40.097740       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:01:40.097813       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:01:40.097821       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:01:40.102364       1 config.go:315] "Starting node config controller"
	I0314 18:01:40.102388       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:01:40.198393       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 18:01:40.198475       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:01:40.203238       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [f2d1880a8575] <==
	I0314 18:03:59.128432       1 server_others.go:69] "Using iptables proxy"
	I0314 18:03:59.140418       1 node.go:141] Successfully retrieved node IP: 172.17.91.78
	I0314 18:03:59.196204       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:03:59.196286       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:03:59.200012       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:03:59.200239       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:03:59.201195       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:03:59.201228       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:03:59.202266       1 config.go:188] "Starting service config controller"
	I0314 18:03:59.202305       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:03:59.202334       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:03:59.202343       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:03:59.202875       1 config.go:315] "Starting node config controller"
	I0314 18:03:59.202914       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:03:59.302413       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 18:03:59.303063       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:03:59.302445       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [1e6c09ec79cb] <==
	
	
	==> kube-scheduler [a51172c26d79] <==
	I0314 18:03:53.888159       1 serving.go:348] Generated self-signed cert in-memory
	W0314 18:03:56.316555       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 18:03:56.316625       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 18:03:56.316638       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 18:03:56.316646       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 18:03:56.388543       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 18:03:56.390838       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:03:56.397666       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 18:03:56.398804       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 18:03:56.400360       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 18:03:56.400716       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 18:03:56.504254       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 18:03:56 functional-866600 kubelet[6708]: I0314 18:03:56.460597    6708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8123be17-49ac-450e-9ff2-48b35f8a9a0f-xtables-lock\") pod \"kube-proxy-7dppw\" (UID: \"8123be17-49ac-450e-9ff2-48b35f8a9a0f\") " pod="kube-system/kube-proxy-7dppw"
	Mar 14 18:03:56 functional-866600 kubelet[6708]: I0314 18:03:56.460787    6708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8123be17-49ac-450e-9ff2-48b35f8a9a0f-lib-modules\") pod \"kube-proxy-7dppw\" (UID: \"8123be17-49ac-450e-9ff2-48b35f8a9a0f\") " pod="kube-system/kube-proxy-7dppw"
	Mar 14 18:03:56 functional-866600 kubelet[6708]: E0314 18:03:56.823754    6708 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-functional-866600\" already exists" pod="kube-system/kube-apiserver-functional-866600"
	Mar 14 18:03:57 functional-866600 kubelet[6708]: E0314 18:03:57.461698    6708 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Mar 14 18:03:57 functional-866600 kubelet[6708]: E0314 18:03:57.461883    6708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d8f04ff-70b9-4332-a120-8993958cfd33-config-volume podName:5d8f04ff-70b9-4332-a120-8993958cfd33 nodeName:}" failed. No retries permitted until 2024-03-14 18:03:57.961857255 +0000 UTC m=+6.835060084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d8f04ff-70b9-4332-a120-8993958cfd33-config-volume") pod "coredns-5dd5756b68-n84nx" (UID: "5d8f04ff-70b9-4332-a120-8993958cfd33") : failed to sync configmap cache: timed out waiting for the condition
	Mar 14 18:03:57 functional-866600 kubelet[6708]: E0314 18:03:57.484219    6708 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Mar 14 18:03:57 functional-866600 kubelet[6708]: E0314 18:03:57.484355    6708 projected.go:198] Error preparing data for projected volume kube-api-access-pbwvz for pod kube-system/storage-provisioner: failed to sync configmap cache: timed out waiting for the condition
	Mar 14 18:03:57 functional-866600 kubelet[6708]: E0314 18:03:57.484454    6708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74f7dcf3-94a7-441e-a9c5-207e2bbd1efe-kube-api-access-pbwvz podName:74f7dcf3-94a7-441e-a9c5-207e2bbd1efe nodeName:}" failed. No retries permitted until 2024-03-14 18:03:57.984406627 +0000 UTC m=+6.857609356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pbwvz" (UniqueName: "kubernetes.io/projected/74f7dcf3-94a7-441e-a9c5-207e2bbd1efe-kube-api-access-pbwvz") pod "storage-provisioner" (UID: "74f7dcf3-94a7-441e-a9c5-207e2bbd1efe") : failed to sync configmap cache: timed out waiting for the condition
	Mar 14 18:03:57 functional-866600 kubelet[6708]: E0314 18:03:57.484274    6708 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Mar 14 18:03:57 functional-866600 kubelet[6708]: E0314 18:03:57.484484    6708 projected.go:198] Error preparing data for projected volume kube-api-access-j54ct for pod kube-system/kube-proxy-7dppw: failed to sync configmap cache: timed out waiting for the condition
	Mar 14 18:03:57 functional-866600 kubelet[6708]: E0314 18:03:57.484540    6708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8123be17-49ac-450e-9ff2-48b35f8a9a0f-kube-api-access-j54ct podName:8123be17-49ac-450e-9ff2-48b35f8a9a0f nodeName:}" failed. No retries permitted until 2024-03-14 18:03:57.984530837 +0000 UTC m=+6.857733666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j54ct" (UniqueName: "kubernetes.io/projected/8123be17-49ac-450e-9ff2-48b35f8a9a0f-kube-api-access-j54ct") pod "kube-proxy-7dppw" (UID: "8123be17-49ac-450e-9ff2-48b35f8a9a0f") : failed to sync configmap cache: timed out waiting for the condition
	Mar 14 18:03:57 functional-866600 kubelet[6708]: E0314 18:03:57.487743    6708 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Mar 14 18:03:57 functional-866600 kubelet[6708]: E0314 18:03:57.487847    6708 projected.go:198] Error preparing data for projected volume kube-api-access-gr85m for pod kube-system/coredns-5dd5756b68-n84nx: failed to sync configmap cache: timed out waiting for the condition
	Mar 14 18:03:57 functional-866600 kubelet[6708]: E0314 18:03:57.487928    6708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d8f04ff-70b9-4332-a120-8993958cfd33-kube-api-access-gr85m podName:5d8f04ff-70b9-4332-a120-8993958cfd33 nodeName:}" failed. No retries permitted until 2024-03-14 18:03:57.987912518 +0000 UTC m=+6.861115247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gr85m" (UniqueName: "kubernetes.io/projected/5d8f04ff-70b9-4332-a120-8993958cfd33-kube-api-access-gr85m") pod "coredns-5dd5756b68-n84nx" (UID: "5d8f04ff-70b9-4332-a120-8993958cfd33") : failed to sync configmap cache: timed out waiting for the condition
	Mar 14 18:03:58 functional-866600 kubelet[6708]: I0314 18:03:58.108193    6708 scope.go:117] "RemoveContainer" containerID="35b40dfba1a9ecf0135d1320eaae966241ac79e8c8bcd8864819825bd9101ee4"
	Mar 14 18:04:51 functional-866600 kubelet[6708]: E0314 18:04:51.425422    6708 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:04:51 functional-866600 kubelet[6708]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:04:51 functional-866600 kubelet[6708]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:04:51 functional-866600 kubelet[6708]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:04:51 functional-866600 kubelet[6708]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:05:51 functional-866600 kubelet[6708]: E0314 18:05:51.423934    6708 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:05:51 functional-866600 kubelet[6708]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:05:51 functional-866600 kubelet[6708]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:05:51 functional-866600 kubelet[6708]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:05:51 functional-866600 kubelet[6708]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [44e064493f5c] <==
	I0314 18:01:46.359418       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 18:01:46.372133       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 18:01:46.372190       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 18:01:46.440595       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 18:01:46.441005       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-866600_c2156b6f-9121-408b-9660-5fefbb7a8e01!
	I0314 18:01:46.441476       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7645130a-0454-49e7-94af-d64998a0a24d", APIVersion:"v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-866600_c2156b6f-9121-408b-9660-5fefbb7a8e01 became leader
	I0314 18:01:46.543627       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-866600_c2156b6f-9121-408b-9660-5fefbb7a8e01!
	
	
	==> storage-provisioner [885ec235a9b5] <==
	I0314 18:03:59.063798       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 18:03:59.076515       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 18:03:59.078476       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 18:04:16.491447       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 18:04:16.491749       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7645130a-0454-49e7-94af-d64998a0a24d", APIVersion:"v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-866600_5ddafd0c-797e-4325-9647-deec44f4b881 became leader
	I0314 18:04:16.492864       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-866600_5ddafd0c-797e-4325-9647-deec44f4b881!
	I0314 18:04:16.593747       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-866600_5ddafd0c-797e-4325-9647-deec44f4b881!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 18:05:44.224851    8708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-866600 -n functional-866600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-866600 -n functional-866600: (11.1593435s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-866600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (31.24s)
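
Every stderr mismatch in this run traces back to the same warning from main.go:291: the embedded Docker client cannot resolve CLI context "default" because no meta.json exists in the context store on this Jenkins host. The long hex component in the missing path is consistent with Docker keying context directories by the SHA-256 digest of the context name; that is an inference from the path in the warning, not something these logs confirm. A minimal Go sketch of that derivation, under that assumption:

	package main

	import (
		"crypto/sha256"
		"fmt"
		"path/filepath"
	)

	func main() {
		// Assumption: the Docker context store names each context directory
		// after the SHA-256 digest of the context name. Printing the path this
		// yields for "default" lets you compare it against the 37a8eec1...
		// component in the warning above.
		digest := sha256.Sum256([]byte("default"))
		meta := filepath.Join(`C:\Users\jenkins.minikube7\.docker`,
			"contexts", "meta", fmt.Sprintf("%x", digest), "meta.json")
		fmt.Println(meta)
	}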

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-866600 config unset cpus" to be -""- but got *"W0314 18:08:37.875048   10392 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-866600 config get cpus: exit status 14 (237.929ms)

                                                
                                                
** stderr ** 
	W0314 18:08:38.176863   10648 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-866600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0314 18:08:38.176863   10648 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-866600 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0314 18:08:38.400929    3492 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-866600 config get cpus" to be -""- but got *"W0314 18:08:38.672613   14124 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-866600 config unset cpus" to be -""- but got *"W0314 18:08:38.917232   10504 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-866600 config get cpus: exit status 14 (223.8921ms)

                                                
                                                
** stderr ** 
	W0314 18:08:39.161356      32 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-866600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0314 18:08:39.161356      32 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.52s)
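
All four assertions above fail on stderr content rather than behavior: the expected strings are present, just prefixed by the Docker CLI context warning. A sketch of one way a harness could drop such known-benign noise before comparing; filterBenignWarnings below is a hypothetical helper, not part of functional_test.go:

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// filterBenignWarnings drops stderr lines that match known-benign warning
	// patterns (here, the Docker CLI context warning) so the remainder can be
	// compared against the expected error text.
	func filterBenignWarnings(stderr string) string {
		benign := regexp.MustCompile(`Unable to resolve the current Docker CLI context`)
		var kept []string
		for _, line := range strings.Split(stderr, "\n") {
			if !benign.MatchString(line) {
				kept = append(kept, line)
			}
		}
		return strings.TrimSpace(strings.Join(kept, "\n"))
	}

	func main() {
		got := "W0314 ... Unable to resolve the current Docker CLI context \"default\": ...\nError: specified key could not be found in config"
		// Prints only the Error: line, which is what the assertion expects.
		fmt.Println(filterBenignWarnings(got))
	}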

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-866600 service --namespace=default --https --url hello-node: exit status 1 (15.0231609s)

                                                
                                                
** stderr ** 
	W0314 18:09:20.579699    6904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-866600 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-866600 service hello-node --url --format={{.IP}}: exit status 1 (15.0150793s)

                                                
                                                
** stderr ** 
	W0314 18:09:35.607631    1728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-866600 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-866600 service hello-node --url: exit status 1 (15.0227698s)

                                                
                                                
** stderr ** 
	W0314 18:09:50.630013   13872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-866600 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.02s)
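
The three ServiceCmd subtests (HTTPS, Format, URL) fail identically: the service command exits 1 after roughly 15 seconds without printing an endpoint, so the follow-up checks parse an empty string — hence "" is not a valid IP at functional_test.go:1544 and the empty scheme at functional_test.go:1569. A small sketch of the scheme check implied by that last assertion, assuming the captured endpoint is parsed with net/url (the actual test code may differ):

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		endpoint := "" // nothing was printed before the 15s timeout
		u, err := url.Parse(endpoint)
		if err != nil {
			fmt.Println("unparseable endpoint:", err)
			return
		}
		if u.Scheme != "http" {
			// Mirrors the failure reported at functional_test.go:1569.
			fmt.Printf("expected scheme to be -%q- got scheme: *%q*\n", "http", u.Scheme)
		}
	}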

                                                
                                    
TestMutliControlPlane/serial/PingHostFromPods (65.07s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-9wj82 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-9wj82 -- sh -c "ping -c 1 172.17.80.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-9wj82 -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.4314651s)

                                                
                                                
-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 18:28:02.242979   11712 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.17.80.1) from pod (busybox-5b5d89c9d6-9wj82): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-qjmj7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-qjmj7 -- sh -c "ping -c 1 172.17.80.1"
E0314 18:28:17.964346   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-qjmj7 -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.4137878s)

                                                
                                                
-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 18:28:13.114696   13160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.17.80.1) from pod (busybox-5b5d89c9d6-qjmj7): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-zncln -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-zncln -- sh -c "ping -c 1 172.17.80.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-zncln -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.4359823s)

                                                
                                                
-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 18:28:23.988036   11704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.17.80.1) from pod (busybox-5b5d89c9d6-zncln): exit status 1
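
All three busybox pods resolve host.minikube.internal (the nslookup steps above complete), yet every ICMP echo to the Hyper-V host gateway 172.17.80.1 is lost, so the problem sits in the pod-to-host data path rather than DNS. One common cause on Windows hosts is the firewall dropping inbound echo requests on the vEthernet adapter, though that is an assumption these logs cannot confirm. The probe itself is a one-shot ping driven through kubectl exec; a minimal reproduction of the step at ha_test.go:218, using plain kubectl with the profile's context instead of the minikube wrapper:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// One-shot ping from a busybox pod to the host gateway, mirroring the
		// failing step; pod name, profile, and address come from the log above.
		cmd := exec.Command("kubectl", "--context", "ha-832100",
			"exec", "busybox-5b5d89c9d6-9wj82", "--",
			"sh", "-c", "ping -c 1 172.17.80.1")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// busybox ping exits non-zero when no echo reply arrives, which
			// kubectl exec propagates as a non-zero exit status.
			fmt.Println("ping failed:", err)
		}
	}
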
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-832100 -n ha-832100
E0314 18:28:38.213602   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-832100 -n ha-832100: (11.2360932s)
helpers_test.go:244: <<< TestMutliControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 logs -n 25: (8.0676992s)
helpers_test.go:252: TestMutliControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-866600 image build -t     | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:11 UTC | 14 Mar 24 18:12 UTC |
	|         | localhost/my-image:functional-866600 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-866600                    | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:11 UTC | 14 Mar 24 18:12 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-866600 image ls           | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:12 UTC | 14 Mar 24 18:12 UTC |
	| delete  | -p functional-866600                 | functional-866600 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:15 UTC | 14 Mar 24 18:16 UTC |
	| start   | -p ha-832100 --wait=true             | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:16 UTC | 14 Mar 24 18:27 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- apply -f             | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:27 UTC | 14 Mar 24 18:27 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- rollout status       | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:27 UTC | 14 Mar 24 18:27 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- get pods -o          | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:27 UTC | 14 Mar 24 18:27 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- get pods -o          | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:27 UTC | 14 Mar 24 18:27 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:27 UTC | 14 Mar 24 18:27 UTC |
	|         | busybox-5b5d89c9d6-9wj82 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:27 UTC | 14 Mar 24 18:27 UTC |
	|         | busybox-5b5d89c9d6-qjmj7 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:27 UTC | 14 Mar 24 18:27 UTC |
	|         | busybox-5b5d89c9d6-zncln --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:27 UTC | 14 Mar 24 18:27 UTC |
	|         | busybox-5b5d89c9d6-9wj82 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:27 UTC | 14 Mar 24 18:27 UTC |
	|         | busybox-5b5d89c9d6-qjmj7 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:27 UTC | 14 Mar 24 18:28 UTC |
	|         | busybox-5b5d89c9d6-zncln --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:28 UTC | 14 Mar 24 18:28 UTC |
	|         | busybox-5b5d89c9d6-9wj82 -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:28 UTC | 14 Mar 24 18:28 UTC |
	|         | busybox-5b5d89c9d6-qjmj7 -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:28 UTC | 14 Mar 24 18:28 UTC |
	|         | busybox-5b5d89c9d6-zncln -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- get pods -o          | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:28 UTC | 14 Mar 24 18:28 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:28 UTC | 14 Mar 24 18:28 UTC |
	|         | busybox-5b5d89c9d6-9wj82             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:28 UTC |                     |
	|         | busybox-5b5d89c9d6-9wj82 -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:28 UTC | 14 Mar 24 18:28 UTC |
	|         | busybox-5b5d89c9d6-qjmj7             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:28 UTC |                     |
	|         | busybox-5b5d89c9d6-qjmj7 -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:28 UTC | 14 Mar 24 18:28 UTC |
	|         | busybox-5b5d89c9d6-zncln             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-832100 -- exec                 | ha-832100         | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:28 UTC |                     |
	|         | busybox-5b5d89c9d6-zncln -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1             |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
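	
	The rows above are the multinode DNS and connectivity checks: each busybox pod resolves kubernetes.io, kubernetes.default, and the fully-qualified service name, then extracts the host IP from host.minikube.internal and pings it. The three ping rows against 172.17.80.1 with an empty completion column appear not to have finished successfully when the table was captured. A sketch of the same checks by hand from the host, in PowerShell, assuming the minikube binary is on PATH and the pod names logged above:
	
	    # List the busybox test pods, as the jsonpath query in the table does
	    minikube kubectl -p ha-832100 -- get pods -o jsonpath='{.items[*].metadata.name}'
	    # In-cluster DNS: resolve the API service at its fully-qualified name
	    minikube kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-9wj82 -- nslookup kubernetes.default.svc.cluster.local
	    # Host reachability: pull the host IP out of host.minikube.internal, then ping it once
	    $hostIp = minikube kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-9wj82 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	    minikube kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-9wj82 -- sh -c "ping -c 1 $hostIp"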
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:16:19
	Running on machine: minikube7
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:16:19.570103    4456 out.go:291] Setting OutFile to fd 1484 ...
	I0314 18:16:19.570103    4456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:16:19.570103    4456 out.go:304] Setting ErrFile to fd 1488...
	I0314 18:16:19.570103    4456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:16:19.590119    4456 out.go:298] Setting JSON to false
	I0314 18:16:19.594110    4456 start.go:129] hostinfo: {"hostname":"minikube7","uptime":61984,"bootTime":1710378195,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 18:16:19.594110    4456 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 18:16:19.600257    4456 out.go:177] * [ha-832100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 18:16:19.603483    4456 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:16:19.603483    4456 notify.go:220] Checking for updates...
	I0314 18:16:19.606301    4456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:16:19.608697    4456 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 18:16:19.610828    4456 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:16:19.613298    4456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:16:19.615748    4456 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:16:24.552186    4456 out.go:177] * Using the hyperv driver based on user configuration
	I0314 18:16:24.555353    4456 start.go:297] selected driver: hyperv
	I0314 18:16:24.555353    4456 start.go:901] validating driver "hyperv" against <nil>
	I0314 18:16:24.555353    4456 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:16:24.600539    4456 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 18:16:24.602440    4456 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:16:24.602440    4456 cni.go:84] Creating CNI manager for ""
	I0314 18:16:24.602440    4456 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0314 18:16:24.602440    4456 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 18:16:24.602440    4456 start.go:340] cluster config:
	{Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:16:24.603083    4456 iso.go:125] acquiring lock: {Name:mk1b3e73402180391a20a865a9454da445c269fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:16:24.608581    4456 out.go:177] * Starting "ha-832100" primary control-plane node in "ha-832100" cluster
	I0314 18:16:24.611056    4456 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:16:24.611056    4456 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0314 18:16:24.611056    4456 cache.go:56] Caching tarball of preloaded images
	I0314 18:16:24.611056    4456 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 18:16:24.611056    4456 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 18:16:24.612103    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:16:24.612103    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json: {Name:mk7260dd1ee06e834018ca0cc2517aa0aa781219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:16:24.613610    4456 start.go:360] acquireMachinesLock for ha-832100: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:16:24.613610    4456 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-832100"
	I0314 18:16:24.613610    4456 start.go:93] Provisioning new machine with config: &{Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:16:24.613610    4456 start.go:125] createHost starting for "" (driver="hyperv")
	I0314 18:16:24.618610    4456 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 18:16:24.618610    4456 start.go:159] libmachine.API.Create for "ha-832100" (driver="hyperv")
	I0314 18:16:24.619612    4456 client.go:168] LocalClient.Create starting
	I0314 18:16:24.619612    4456 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0314 18:16:24.619612    4456 main.go:141] libmachine: Decoding PEM data...
	I0314 18:16:24.619612    4456 main.go:141] libmachine: Parsing certificate...
	I0314 18:16:24.619612    4456 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0314 18:16:24.619612    4456 main.go:141] libmachine: Decoding PEM data...
	I0314 18:16:24.619612    4456 main.go:141] libmachine: Parsing certificate...
	I0314 18:16:24.619612    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0314 18:16:26.574347    4456 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0314 18:16:26.574347    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:26.574975    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0314 18:16:28.230177    4456 main.go:141] libmachine: [stdout =====>] : False
	
	I0314 18:16:28.230177    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:28.230177    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 18:16:29.635494    4456 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 18:16:29.636228    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:29.636228    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 18:16:33.032214    4456 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 18:16:33.033373    4456 main.go:141] libmachine: [stderr =====>] : 
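	
	The switch query above is how the driver decides where to attach the VM's NIC: it lists External switches plus the built-in Default Switch (well-known GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444) and, with no External switch present on this host, settles on "Default Switch" below. The same enumeration by hand in PowerShell:
	
	    # List candidate switches the way the driver does: External switches, else the Default Switch GUID
	    Hyper-V\Get-VMSwitch |
	      Select-Object Id, Name, SwitchType |
	      Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
	      Sort-Object -Property SwitchType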
	I0314 18:16:33.035821    4456 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:16:33.399593    4456 main.go:141] libmachine: Creating SSH key...
	I0314 18:16:33.824314    4456 main.go:141] libmachine: Creating VM...
	I0314 18:16:33.824314    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 18:16:36.435806    4456 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 18:16:36.435806    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:36.435984    4456 main.go:141] libmachine: Using switch "Default Switch"
	I0314 18:16:36.436068    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 18:16:38.103622    4456 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 18:16:38.103821    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:38.103821    4456 main.go:141] libmachine: Creating VHD
	I0314 18:16:38.103929    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0314 18:16:41.682302    4456 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 714516BD-7790-4516-B766-B7B00B9D56C7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0314 18:16:41.682492    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:41.682492    4456 main.go:141] libmachine: Writing magic tar header
	I0314 18:16:41.682580    4456 main.go:141] libmachine: Writing SSH key tar header
	I0314 18:16:41.691276    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0314 18:16:44.743032    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:16:44.743032    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:44.743032    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\disk.vhd' -SizeBytes 20000MB
	I0314 18:16:47.143393    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:16:47.143393    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:47.143826    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-832100 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0314 18:16:50.516514    4456 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-832100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0314 18:16:50.516514    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:50.516575    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-832100 -DynamicMemoryEnabled $false
	I0314 18:16:52.613917    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:16:52.613917    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:52.614286    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-832100 -Count 2
	I0314 18:16:54.642826    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:16:54.642826    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:54.643279    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-832100 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\boot2docker.iso'
	I0314 18:16:57.078584    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:16:57.079474    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:57.079474    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-832100 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\disk.vhd'
	I0314 18:16:59.532932    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:16:59.533013    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:59.533013    4456 main.go:141] libmachine: Starting VM...
	I0314 18:16:59.533013    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-832100
	I0314 18:17:02.467216    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:17:02.468233    4456 main.go:141] libmachine: [stderr =====>] : 
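	
	Condensed, the creation sequence just executed maps onto the following Hyper-V cmdlets (a sketch; relative paths stand in for the machine directory under .minikube\machines\ha-832100, sizes as logged). The tiny fixed VHD exists only so the SSH key can be written into it as a raw tar header before the disk is converted to dynamic and grown:
	
	    New-VHD -Path .\fixed.vhd -SizeBytes 10MB -Fixed
	    # (minikube writes the magic tar header and SSH key into fixed.vhd at this point)
	    Convert-VHD -Path .\fixed.vhd -DestinationPath .\disk.vhd -VHDType Dynamic -DeleteSource
	    Resize-VHD -Path .\disk.vhd -SizeBytes 20000MB
	    New-VM ha-832100 -Path . -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	    Set-VMMemory -VMName ha-832100 -DynamicMemoryEnabled $false
	    Set-VMProcessor ha-832100 -Count 2
	    Set-VMDvdDrive -VMName ha-832100 -Path .\boot2docker.iso
	    Add-VMHardDiskDrive -VMName ha-832100 -Path .\disk.vhd
	    Start-VM ha-832100
	    # Poll until the guest reports an IPv4 address, as the wait loop below does
	    ((Hyper-V\Get-VM ha-832100).NetworkAdapters[0]).IPAddresses[0]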
	I0314 18:17:02.468233    4456 main.go:141] libmachine: Waiting for host to start...
	I0314 18:17:02.468460    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:04.515225    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:04.515782    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:04.515860    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:06.832825    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:17:06.832825    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:07.838067    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:09.858535    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:09.858535    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:09.859025    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:12.195879    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:17:12.196830    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:13.199730    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:15.225520    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:15.225520    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:15.225520    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:17.506496    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:17:17.510526    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:18.516861    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:20.524104    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:20.524104    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:20.524104    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:22.834956    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:17:22.834956    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:23.847829    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:25.869745    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:25.870727    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:25.870727    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:28.240340    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:28.240340    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:28.240901    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:30.221096    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:30.221986    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:30.221986    4456 machine.go:94] provisionDockerMachine start ...
	I0314 18:17:30.222171    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:32.207017    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:32.207017    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:32.207017    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:34.578365    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:34.583581    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:34.588078    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:17:34.598019    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:17:34.599032    4456 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:17:34.733533    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 18:17:34.733533    4456 buildroot.go:166] provisioning hostname "ha-832100"
	I0314 18:17:34.733620    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:36.719479    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:36.719945    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:36.720139    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:39.059374    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:39.059374    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:39.063548    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:17:39.064000    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:17:39.064073    4456 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-832100 && echo "ha-832100" | sudo tee /etc/hostname
	I0314 18:17:39.214222    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-832100
	
	I0314 18:17:39.214360    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:41.204669    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:41.204669    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:41.205254    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:43.526184    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:43.526499    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:43.530496    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:17:43.530971    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:17:43.530971    4456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:17:43.672815    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:17:43.672815    4456 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 18:17:43.672815    4456 buildroot.go:174] setting up certificates
	I0314 18:17:43.672815    4456 provision.go:84] configureAuth start
	I0314 18:17:43.673628    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:45.658422    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:45.658422    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:45.659276    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:48.050355    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:48.051145    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:48.051145    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:50.017154    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:50.017154    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:50.017517    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:52.355606    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:52.355606    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:52.356127    4456 provision.go:143] copyHostCerts
	I0314 18:17:52.356219    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 18:17:52.356599    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 18:17:52.356685    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 18:17:52.357086    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 18:17:52.357602    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 18:17:52.358274    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 18:17:52.358274    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 18:17:52.358661    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 18:17:52.359619    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 18:17:52.359845    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 18:17:52.359903    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 18:17:52.360187    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 18:17:52.360952    4456 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-832100 san=[127.0.0.1 172.17.90.10 ha-832100 localhost minikube]
	I0314 18:17:52.480194    4456 provision.go:177] copyRemoteCerts
	I0314 18:17:52.489181    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:17:52.489181    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:54.464307    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:54.464307    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:54.464385    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:56.832309    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:56.832950    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:56.832950    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:17:56.939853    4456 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4503374s)
	I0314 18:17:56.939920    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 18:17:56.940311    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 18:17:56.983139    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 18:17:56.983283    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:17:57.023840    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 18:17:57.024253    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0314 18:17:57.069071    4456 provision.go:87] duration metric: took 13.3946102s to configureAuth
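	
	configureAuth generated a server certificate for the VM's SANs (127.0.0.1, 172.17.90.10, ha-832100, localhost, minikube), signed by the local minikube CA, and copied ca.pem, server.pem, and server-key.pem into /etc/docker; the docker unit written below points --tlsverify at exactly those paths. A hedged spot-check from the host, assuming a local docker CLI and the default .minikube certs layout (the run above uses C:\Users\jenkins.minikube7\minikube-integration\.minikube instead):
	
	    # Talk to the VM's TLS-guarded engine port directly with the client certs minikube manages
	    docker --tlsverify `
	      --tlscacert "$HOME\.minikube\certs\ca.pem" `
	      --tlscert   "$HOME\.minikube\certs\cert.pem" `
	      --tlskey    "$HOME\.minikube\certs\key.pem" `
	      -H tcp://172.17.90.10:2376 version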
	I0314 18:17:57.069173    4456 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:17:57.069987    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:17:57.070073    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:59.046331    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:59.046331    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:59.046440    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:01.433498    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:01.433498    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:01.437608    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:01.438011    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:18:01.438011    4456 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 18:18:01.567376    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 18:18:01.567376    4456 buildroot.go:70] root file system type: tmpfs
	I0314 18:18:01.568079    4456 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 18:18:01.568281    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:03.577590    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:03.578127    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:03.578206    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:05.974751    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:05.974751    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:05.979974    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:05.979974    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:18:05.980499    4456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 18:18:06.135976    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 18:18:06.135976    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:08.130369    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:08.130437    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:08.130437    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:10.512425    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:10.512425    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:10.516576    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:10.516851    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:18:10.516851    4456 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 18:18:12.636100    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
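	
	The update above is deliberately idempotent: the unit is written to docker.service.new, diff decides whether anything changed, and only on a difference (or, as here, a missing original, hence the "can't stat" message) is the file moved into place, the daemon reloaded, and docker enabled and restarted. Quick after-the-fact checks, as a sketch run from the host:
	
	    # Verify the unit that was just swapped in and that the engine came up
	    minikube ssh -p ha-832100 "systemctl cat docker.service"
	    minikube ssh -p ha-832100 "sudo systemctl is-active docker"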
	
	I0314 18:18:12.636100    4456 machine.go:97] duration metric: took 42.410923s to provisionDockerMachine
	I0314 18:18:12.636100    4456 client.go:171] duration metric: took 1m48.0083436s to LocalClient.Create
	I0314 18:18:12.636100    4456 start.go:167] duration metric: took 1m48.0093453s to libmachine.API.Create "ha-832100"
	I0314 18:18:12.636100    4456 start.go:293] postStartSetup for "ha-832100" (driver="hyperv")
	I0314 18:18:12.636100    4456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:18:12.645942    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:18:12.645942    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:14.642369    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:14.642369    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:14.642369    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:17.025833    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:17.025833    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:17.026295    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:18:17.124029    4456 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4777515s)
	I0314 18:18:17.133691    4456 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:18:17.140244    4456 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:18:17.140244    4456 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 18:18:17.140773    4456 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 18:18:17.140985    4456 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 18:18:17.140985    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 18:18:17.150172    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:18:17.166649    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 18:18:17.208803    4456 start.go:296] duration metric: took 4.5723093s for postStartSetup
	I0314 18:18:17.211304    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:19.181226    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:19.181226    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:19.181301    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:21.590953    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:21.591818    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:21.591891    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:18:21.594083    4456 start.go:128] duration metric: took 1m56.9716561s to createHost
	I0314 18:18:21.594171    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:23.560059    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:23.560200    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:23.560257    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:25.943383    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:25.943383    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:25.947291    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:25.947970    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:18:25.947970    4456 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 18:18:26.074570    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440306.334716873
	
	I0314 18:18:26.074570    4456 fix.go:216] guest clock: 1710440306.334716873
	I0314 18:18:26.074570    4456 fix.go:229] Guest: 2024-03-14 18:18:26.334716873 +0000 UTC Remote: 2024-03-14 18:18:21.5941717 +0000 UTC m=+122.153683001 (delta=4.740545173s)
	I0314 18:18:26.074570    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:28.059838    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:28.059838    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:28.060392    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:30.430433    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:30.430433    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:30.435575    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:30.435639    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:18:30.435639    4456 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710440306
	I0314 18:18:30.573055    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 18:18:26 UTC 2024
	
	I0314 18:18:30.573055    4456 fix.go:236] clock set: Thu Mar 14 18:18:26 UTC 2024
	 (err=<nil>)
	I0314 18:18:30.573055    4456 start.go:83] releasing machines lock for "ha-832100", held for 2m5.9499552s
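	
	That exchange is the clock-skew fix: minikube reads the guest clock (date +%s.%N), compares it against the host's wall clock (a 4.74s delta here), and resets the guest with date -s @<epoch> so certificate validity and log timestamps line up. One way to repeat the comparison by hand (a sketch; it pins the guest to the host's current epoch rather than the value logged above):
	
	    minikube ssh -p ha-832100 "date +%s.%N"                  # guest clock
	    [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()             # host clock
	    minikube ssh -p ha-832100 "sudo date -s @$([DateTimeOffset]::UtcNow.ToUnixTimeSeconds())"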
	I0314 18:18:30.573824    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:32.595388    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:32.595520    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:32.595605    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:34.974371    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:34.974371    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:34.978905    4456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:18:34.978980    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:34.989738    4456 ssh_runner.go:195] Run: cat /version.json
	I0314 18:18:34.989738    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:36.979652    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:36.980560    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:36.980560    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:37.034815    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:37.034815    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:37.034914    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:39.426710    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:39.426710    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:39.426710    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:18:39.445096    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:39.445096    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:39.445096    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:18:39.527570    4456 ssh_runner.go:235] Completed: cat /version.json: (4.5374924s)
	I0314 18:18:39.543418    4456 ssh_runner.go:195] Run: systemctl --version
	I0314 18:18:39.663394    4456 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6840628s)
	I0314 18:18:39.675325    4456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:18:39.684605    4456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:18:39.693680    4456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:18:39.719997    4456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:18:39.719997    4456 start.go:494] detecting cgroup driver to use...
	I0314 18:18:39.720748    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:18:39.761611    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 18:18:39.787989    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 18:18:39.807207    4456 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 18:18:39.815120    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 18:18:39.843529    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:18:39.871811    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 18:18:39.901060    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:18:39.930414    4456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:18:39.960525    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
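The run of sed invocations above rewrites /etc/containerd/config.toml in place: sandbox_image, restrict_oom_score_adj, SystemdCgroup, the runc runtime name, and conf_dir. A sketch of the SystemdCgroup edit using Go's regexp package (same pattern and backreference as the sed expression; the sample TOML fragment is illustrative):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Stand-in fragment for /etc/containerd/config.toml.
    	conf := "[plugins.\"io.containerd.grpc.v1.cri\"]\n  SystemdCgroup = true\n"
    	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }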
	I0314 18:18:39.989233    4456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:18:40.014374    4456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:18:40.043459    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:40.224375    4456 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 18:18:40.254873    4456 start.go:494] detecting cgroup driver to use...
	I0314 18:18:40.264215    4456 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 18:18:40.296123    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:18:40.326827    4456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:18:40.369696    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:18:40.401357    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:18:40.431556    4456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 18:18:40.502128    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:18:40.526311    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:18:40.570400    4456 ssh_runner.go:195] Run: which cri-dockerd
	I0314 18:18:40.586347    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 18:18:40.602878    4456 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 18:18:40.639724    4456 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 18:18:40.823799    4456 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 18:18:41.002917    4456 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 18:18:41.002917    4456 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
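The 130-byte /etc/docker/daemon.json payload is not printed in the log; assuming Docker's documented exec-opts mechanism for selecting a cgroup driver, a hypothetical fragment would look like this sketch:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    )

    func main() {
    	// Hypothetical daemon.json content; the log only reports its size.
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	out, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(string(out))
    }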
	I0314 18:18:41.043462    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:41.222389    4456 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 18:18:43.732007    4456 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5094301s)
	I0314 18:18:43.740716    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 18:18:43.775083    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:18:43.807756    4456 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 18:18:43.994640    4456 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 18:18:44.186775    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:44.374442    4456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 18:18:44.411100    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:18:44.444240    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:44.633751    4456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 18:18:44.736416    4456 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 18:18:44.749404    4456 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 18:18:44.758930    4456 start.go:562] Will wait 60s for crictl version
	I0314 18:18:44.771523    4456 ssh_runner.go:195] Run: which crictl
	I0314 18:18:44.785524    4456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:18:44.853451    4456 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 18:18:44.860706    4456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:18:44.899884    4456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:18:44.937279    4456 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 18:18:44.937381    4456 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 18:18:44.940889    4456 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 18:18:44.940889    4456 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 18:18:44.940889    4456 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 18:18:44.940889    4456 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 18:18:44.942906    4456 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 18:18:44.942906    4456 ip.go:210] interface addr: 172.17.80.1/20
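The interface search above walks the host's adapters and keeps the first one whose name starts with the wanted prefix, then reads its addresses. A standalone sketch of the same lookup with the standard net package (not minikube's ip.go itself):

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    	"strings"
    )

    func main() {
    	const prefix = "vEthernet (Default Switch)"
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, ifc := range ifaces {
    		if !strings.HasPrefix(ifc.Name, prefix) {
    			fmt.Printf("%q does not match prefix %q\n", ifc.Name, prefix)
    			continue
    		}
    		addrs, err := ifc.Addrs()
    		if err != nil {
    			log.Fatal(err)
    		}
    		for _, a := range addrs {
    			fmt.Println("interface addr:", a)
    		}
    		return
    	}
    }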
	I0314 18:18:44.951913    4456 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 18:18:44.958036    4456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
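The one-liner above makes the host reachable from the guest as host.minikube.internal: it filters any stale mapping out of /etc/hosts and appends the current gateway address. The same transformation in pure Go (sample input; the real command edits the guest's /etc/hosts over SSH):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n172.17.80.2\thost.minikube.internal\n"
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		// Drop any stale host.minikube.internal entry.
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	// Append the fresh mapping, mirroring the echo in the shell pipeline.
    	kept = append(kept, "172.17.80.1\thost.minikube.internal")
    	fmt.Println(strings.Join(kept, "\n"))
    }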
	I0314 18:18:44.989374    4456 kubeadm.go:877] updating cluster {Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4
ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 18:18:44.989607    4456 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:18:44.995852    4456 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 18:18:45.018479    4456 docker.go:685] Got preloaded images: 
	I0314 18:18:45.018479    4456 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0314 18:18:45.027985    4456 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 18:18:45.054351    4456 ssh_runner.go:195] Run: which lz4
	I0314 18:18:45.059928    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0314 18:18:45.068729    4456 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 18:18:45.074694    4456 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 18:18:45.074907    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0314 18:18:46.912610    4456 docker.go:649] duration metric: took 1.8525431s to copy over tarball
	I0314 18:18:46.923411    4456 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 18:18:57.175149    4456 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.2509711s)
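The ~400 MB preload tarball is copied into the guest and unpacked with tar -I lz4 straight into /var, which is why a fresh VM skips pulling images one by one. For illustration only, the decompress-then-untar step could be reproduced in Go with the third-party github.com/pierrec/lz4/v4 package (an assumption for this sketch; minikube shells out to tar instead):

    package main

    import (
    	"archive/tar"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"github.com/pierrec/lz4/v4"
    )

    func main() {
    	f, err := os.Open("preloaded.tar.lz4")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()
    	// lz4-decompress the stream, then walk the tar entries inside it.
    	tr := tar.NewReader(lz4.NewReader(f))
    	for {
    		hdr, err := tr.Next()
    		if err == io.EOF {
    			break
    		}
    		if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Println(hdr.Name) // listing only; extraction would write each file out
    	}
    }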
	I0314 18:18:57.175278    4456 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 18:18:57.243832    4456 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 18:18:57.261769    4456 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0314 18:18:57.301293    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:57.491895    4456 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 18:19:00.670420    4456 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1782876s)
	I0314 18:19:00.683094    4456 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 18:19:00.708931    4456 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 18:19:00.709027    4456 cache_images.go:84] Images are preloaded, skipping loading
	I0314 18:19:00.709027    4456 kubeadm.go:928] updating node { 172.17.90.10 8443 v1.28.4 docker true true} ...
	I0314 18:19:00.709194    4456 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-832100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.90.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:19:00.718289    4456 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 18:19:00.752901    4456 cni.go:84] Creating CNI manager for ""
	I0314 18:19:00.752901    4456 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 18:19:00.752978    4456 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 18:19:00.753031    4456 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.90.10 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-832100 NodeName:ha-832100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.90.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.90.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/ma
nifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 18:19:00.753031    4456 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.90.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-832100"
	  kubeletExtraArgs:
	    node-ip: 172.17.90.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.90.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 18:19:00.753031    4456 kube-vip.go:105] generating kube-vip config ...
	I0314 18:19:00.753031    4456 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0314 18:19:00.761680    4456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:19:00.778879    4456 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 18:19:00.788003    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0314 18:19:00.804013    4456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0314 18:19:00.837584    4456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:19:00.869535    4456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0314 18:19:00.899726    4456 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0314 18:19:00.936437    4456 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:19:00.942599    4456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:19:00.971518    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:19:01.163200    4456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:19:01.189974    4456 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100 for IP: 172.17.90.10
	I0314 18:19:01.190071    4456 certs.go:194] generating shared ca certs ...
	I0314 18:19:01.190108    4456 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:01.190745    4456 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 18:19:01.190999    4456 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 18:19:01.191183    4456 certs.go:256] generating profile certs ...
	I0314 18:19:01.191220    4456 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.key
	I0314 18:19:01.191220    4456 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.crt with IP's: []
	I0314 18:19:01.463738    4456 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.crt ...
	I0314 18:19:01.463738    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.crt: {Name:mke7ee85d592d623b3614c18b0b008ebca64d685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:01.464740    4456 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.key ...
	I0314 18:19:01.464740    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.key: {Name:mkdce32fffea6e89971c206f5b31259fa396197c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:01.465747    4456 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.9d03ba8b
	I0314 18:19:01.466641    4456 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.9d03ba8b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.90.10 172.17.95.254]
	I0314 18:19:02.138161    4456 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.9d03ba8b ...
	I0314 18:19:02.138161    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.9d03ba8b: {Name:mk6b3b16c8ed352ed751c3eb6da317e96d566d2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:02.140164    4456 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.9d03ba8b ...
	I0314 18:19:02.140164    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.9d03ba8b: {Name:mkf92df0e21d368f7173a4c5e155dc40a1b2ed63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:02.141457    4456 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.9d03ba8b -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt
	I0314 18:19:02.151651    4456 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.9d03ba8b -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key
	I0314 18:19:02.152652    4456 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key
	I0314 18:19:02.152652    4456 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt with IP's: []
	I0314 18:19:02.292452    4456 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt ...
	I0314 18:19:02.292452    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt: {Name:mk51d01dcd9f3462515c3f3cd9453163da1a210a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:02.293472    4456 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key ...
	I0314 18:19:02.293472    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key: {Name:mk1b33d1bbd689220bdf6afe70b77dac85333b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
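Each profile cert above is generated locally and signed by the shared minikubeCA, with the apiserver cert carrying the service, loopback, node, and HA-VIP addresses as IP SANs. A self-contained sketch of issuing such a cert with the standard crypto/x509 package (self-signed here for brevity; the real one is CA-signed, and names/durations are illustrative):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{ // the SAN list from the apiserver cert above
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("172.17.90.10"),
    			net.ParseIP("172.17.95.254"),
    		},
    	}
    	// Self-signed for the sketch; minikube signs with its own CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }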
	I0314 18:19:02.295078    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:19:02.295078    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:19:02.295078    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:19:02.295078    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:19:02.296174    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:19:02.296286    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:19:02.296286    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:19:02.303929    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:19:02.305075    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 18:19:02.305075    4456 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 18:19:02.305075    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 18:19:02.306120    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 18:19:02.306120    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 18:19:02.306120    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 18:19:02.306721    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 18:19:02.306879    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 18:19:02.306879    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:02.306879    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 18:19:02.308314    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:19:02.351854    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 18:19:02.397991    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:19:02.439484    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 18:19:02.489177    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 18:19:02.531122    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 18:19:02.573853    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:19:02.614786    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 18:19:02.656146    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 18:19:02.701887    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:19:02.744436    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 18:19:02.788793    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 18:19:02.829103    4456 ssh_runner.go:195] Run: openssl version
	I0314 18:19:02.846387    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 18:19:02.873747    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 18:19:02.880753    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 18:19:02.890042    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 18:19:02.907607    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
	I0314 18:19:02.933906    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 18:19:02.960438    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 18:19:02.967437    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 18:19:02.976216    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 18:19:02.994051    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:19:03.022347    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:19:03.050752    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:03.058794    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:03.067761    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:03.086070    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
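Each CA is linked into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0 above) so that OpenSSL's lookup-by-hash finds it. A small sketch that derives the link name the same way, by shelling out to openssl x509 -hash (assumes an openssl binary and a minikubeCA.pem in the working directory):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
    		"-in", "minikubeCA.pem").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	// Print the symlink the provisioning step would create.
    	fmt.Printf("ln -fs minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
    }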
	I0314 18:19:03.114991    4456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:19:03.121913    4456 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:19:03.122244    4456 kubeadm.go:391] StartCluster: {Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clu
sterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:19:03.128989    4456 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 18:19:03.168084    4456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 18:19:03.195172    4456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 18:19:03.221740    4456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 18:19:03.238594    4456 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 18:19:03.238659    4456 kubeadm.go:156] found existing configuration files:
	
	I0314 18:19:03.247055    4456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 18:19:03.263809    4456 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 18:19:03.276206    4456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 18:19:03.303087    4456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 18:19:03.321025    4456 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 18:19:03.330144    4456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 18:19:03.359230    4456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 18:19:03.376060    4456 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 18:19:03.385009    4456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 18:19:03.412096    4456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 18:19:03.427993    4456 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 18:19:03.440304    4456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 18:19:03.457014    4456 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 18:19:03.869365    4456 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 18:19:18.642386    4456 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 18:19:18.642447    4456 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 18:19:18.642750    4456 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 18:19:18.643097    4456 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 18:19:18.643097    4456 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0314 18:19:18.643097    4456 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 18:19:18.645704    4456 out.go:204]   - Generating certificates and keys ...
	I0314 18:19:18.645704    4456 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 18:19:18.645704    4456 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 18:19:18.645704    4456 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 18:19:18.645704    4456 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 18:19:18.645704    4456 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 18:19:18.646698    4456 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 18:19:18.646698    4456 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 18:19:18.646698    4456 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-832100 localhost] and IPs [172.17.90.10 127.0.0.1 ::1]
	I0314 18:19:18.646698    4456 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 18:19:18.646698    4456 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-832100 localhost] and IPs [172.17.90.10 127.0.0.1 ::1]
	I0314 18:19:18.646698    4456 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 18:19:18.647712    4456 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 18:19:18.647712    4456 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 18:19:18.647712    4456 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 18:19:18.647712    4456 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 18:19:18.647712    4456 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 18:19:18.647712    4456 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 18:19:18.647712    4456 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 18:19:18.647712    4456 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 18:19:18.648703    4456 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 18:19:18.650703    4456 out.go:204]   - Booting up control plane ...
	I0314 18:19:18.650703    4456 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 18:19:18.650703    4456 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 18:19:18.650703    4456 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 18:19:18.650703    4456 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 18:19:18.651707    4456 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 18:19:18.651707    4456 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 18:19:18.651707    4456 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 18:19:18.651707    4456 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.607376 seconds
	I0314 18:19:18.651707    4456 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 18:19:18.652704    4456 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 18:19:18.652704    4456 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 18:19:18.652704    4456 kubeadm.go:309] [mark-control-plane] Marking the node ha-832100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 18:19:18.652704    4456 kubeadm.go:309] [bootstrap-token] Using token: 9rmtes.0i3jfqfb19kabi9y
	I0314 18:19:18.656707    4456 out.go:204]   - Configuring RBAC rules ...
	I0314 18:19:18.656707    4456 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 18:19:18.656707    4456 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 18:19:18.657711    4456 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 18:19:18.657711    4456 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 18:19:18.657711    4456 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 18:19:18.657711    4456 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 18:19:18.658717    4456 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 18:19:18.658717    4456 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 18:19:18.658717    4456 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 18:19:18.658717    4456 kubeadm.go:309] 
	I0314 18:19:18.658717    4456 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 18:19:18.658717    4456 kubeadm.go:309] 
	I0314 18:19:18.658717    4456 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 18:19:18.658717    4456 kubeadm.go:309] 
	I0314 18:19:18.658717    4456 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 18:19:18.658717    4456 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 18:19:18.658717    4456 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 18:19:18.658717    4456 kubeadm.go:309] 
	I0314 18:19:18.658717    4456 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 18:19:18.659726    4456 kubeadm.go:309] 
	I0314 18:19:18.659726    4456 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 18:19:18.659726    4456 kubeadm.go:309] 
	I0314 18:19:18.659726    4456 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 18:19:18.659726    4456 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 18:19:18.659726    4456 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 18:19:18.659726    4456 kubeadm.go:309] 
	I0314 18:19:18.659726    4456 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 18:19:18.659726    4456 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 18:19:18.659726    4456 kubeadm.go:309] 
	I0314 18:19:18.660708    4456 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9rmtes.0i3jfqfb19kabi9y \
	I0314 18:19:18.660708    4456 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb \
	I0314 18:19:18.660708    4456 kubeadm.go:309] 	--control-plane 
	I0314 18:19:18.660708    4456 kubeadm.go:309] 
	I0314 18:19:18.660708    4456 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 18:19:18.660708    4456 kubeadm.go:309] 
	I0314 18:19:18.660708    4456 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9rmtes.0i3jfqfb19kabi9y \
	I0314 18:19:18.660708    4456 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb 
	I0314 18:19:18.660708    4456 cni.go:84] Creating CNI manager for ""
	I0314 18:19:18.660708    4456 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 18:19:18.664709    4456 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0314 18:19:18.678306    4456 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0314 18:19:18.686282    4456 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0314 18:19:18.686282    4456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0314 18:19:18.729001    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0314 18:19:20.248255    4456 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5191406s)
	I0314 18:19:20.248255    4456 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 18:19:20.259272    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:20.260258    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-832100 minikube.k8s.io/updated_at=2024_03_14T18_19_20_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=ha-832100 minikube.k8s.io/primary=true
	I0314 18:19:20.265696    4456 ops.go:34] apiserver oom_adj: -16
	I0314 18:19:20.444909    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:20.953298    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:21.457360    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:21.956138    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:22.444787    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:22.946295    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:23.446967    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:23.949730    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:24.453183    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:24.952493    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:25.455636    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:25.964798    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:26.461958    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:26.945667    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:27.450236    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:27.950498    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:28.456067    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:28.957103    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:29.447930    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:29.953230    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:30.454542    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:30.957463    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:31.459529    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:31.949434    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:32.102377    4456 kubeadm.go:1106] duration metric: took 11.8532378s to wait for elevateKubeSystemPrivileges
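The burst of identical "kubectl get sa default" runs above is a poll: minikube retries roughly every 500 ms until the default service account exists, which is the signal that kube-system privileges can be elevated. The shape of that wait, as a standalone sketch (command and timeout are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Succeeds once the default service account has been created.
    		if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }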
	W0314 18:19:32.102377    4456 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 18:19:32.102377    4456 kubeadm.go:393] duration metric: took 28.9779701s to StartCluster
	I0314 18:19:32.103377    4456 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:32.103377    4456 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:19:32.104392    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:32.106379    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 18:19:32.106379    4456 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 18:19:32.106379    4456 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:19:32.106379    4456 addons.go:69] Setting default-storageclass=true in profile "ha-832100"
	I0314 18:19:32.106379    4456 start.go:240] waiting for startup goroutines ...
	I0314 18:19:32.106379    4456 addons.go:69] Setting storage-provisioner=true in profile "ha-832100"
	I0314 18:19:32.106379    4456 addons.go:234] Setting addon storage-provisioner=true in "ha-832100"
	I0314 18:19:32.106379    4456 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-832100"
	I0314 18:19:32.106379    4456 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:19:32.106379    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:19:32.107390    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:19:32.107390    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:19:32.304139    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 18:19:32.890121    4456 start.go:948] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
	I0314 18:19:34.206141    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:19:34.206348    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:34.208924    4456 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 18:19:34.206694    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:19:34.209021    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:34.210206    4456 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:19:34.211183    4456 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:19:34.211776    4456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 18:19:34.211776    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:19:34.211972    4456 kapi.go:59] client config for ha-832100: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-832100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-832100\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 18:19:34.213336    4456 cert_rotation.go:137] Starting client certificate rotation controller
	I0314 18:19:34.213336    4456 addons.go:234] Setting addon default-storageclass=true in "ha-832100"
	I0314 18:19:34.213336    4456 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:19:34.214492    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:19:36.335141    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:19:36.335311    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:36.335141    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:19:36.335392    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:36.335392    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:19:36.335392    4456 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 18:19:36.335392    4456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 18:19:36.335392    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:19:38.402635    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:19:38.402635    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:38.402635    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:19:38.858045    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:19:38.858826    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:38.858826    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:19:39.003206    4456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:19:40.860077    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:19:40.860589    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:40.860974    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:19:40.991850    4456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 18:19:41.264681    4456 round_trippers.go:463] GET https://172.17.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0314 18:19:41.265221    4456 round_trippers.go:469] Request Headers:
	I0314 18:19:41.265221    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:19:41.265302    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:19:41.278457    4456 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0314 18:19:41.279333    4456 round_trippers.go:463] PUT https://172.17.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0314 18:19:41.279375    4456 round_trippers.go:469] Request Headers:
	I0314 18:19:41.279375    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:19:41.279375    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:19:41.279375    4456 round_trippers.go:473]     Content-Type: application/json
	I0314 18:19:41.283205    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:19:41.286182    4456 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0314 18:19:41.288972    4456 addons.go:505] duration metric: took 9.1819088s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0314 18:19:41.289115    4456 start.go:245] waiting for cluster config update ...
	I0314 18:19:41.289115    4456 start.go:254] writing updated cluster config ...
	I0314 18:19:41.291529    4456 out.go:177] 
	I0314 18:19:41.301360    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:19:41.301360    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:19:41.306904    4456 out.go:177] * Starting "ha-832100-m02" control-plane node in "ha-832100" cluster
	I0314 18:19:41.310057    4456 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:19:41.310057    4456 cache.go:56] Caching tarball of preloaded images
	I0314 18:19:41.310583    4456 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 18:19:41.310583    4456 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 18:19:41.311122    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:19:41.315361    4456 start.go:360] acquireMachinesLock for ha-832100-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:19:41.316362    4456 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-832100-m02"
	I0314 18:19:41.316763    4456 start.go:93] Provisioning new machine with config: &{Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:19:41.317056    4456 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0314 18:19:41.320193    4456 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 18:19:41.320193    4456 start.go:159] libmachine.API.Create for "ha-832100" (driver="hyperv")
	I0314 18:19:41.320193    4456 client.go:168] LocalClient.Create starting
	I0314 18:19:41.320921    4456 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0314 18:19:41.321106    4456 main.go:141] libmachine: Decoding PEM data...
	I0314 18:19:41.321106    4456 main.go:141] libmachine: Parsing certificate...
	I0314 18:19:41.321289    4456 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0314 18:19:41.321472    4456 main.go:141] libmachine: Decoding PEM data...
	I0314 18:19:41.321533    4456 main.go:141] libmachine: Parsing certificate...
	I0314 18:19:41.321634    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0314 18:19:43.131412    4456 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0314 18:19:43.131581    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:43.131581    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0314 18:19:44.749899    4456 main.go:141] libmachine: [stdout =====>] : False
	
	I0314 18:19:44.750152    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:44.750225    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 18:19:46.143569    4456 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 18:19:46.144198    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:46.144198    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 18:19:49.507067    4456 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 18:19:49.507116    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:49.510906    4456 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:19:49.843434    4456 main.go:141] libmachine: Creating SSH key...
	I0314 18:19:49.942462    4456 main.go:141] libmachine: Creating VM...
	I0314 18:19:49.942462    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 18:19:52.604816    4456 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 18:19:52.604816    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:52.606490    4456 main.go:141] libmachine: Using switch "Default Switch"
	I0314 18:19:52.606490    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 18:19:54.259707    4456 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 18:19:54.259707    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:54.259707    4456 main.go:141] libmachine: Creating VHD
	I0314 18:19:54.259707    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0314 18:19:57.856082    4456 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4D3D8511-8E83-4933-80F8-706AD157DDD8
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0314 18:19:57.856168    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:57.856168    4456 main.go:141] libmachine: Writing magic tar header
	I0314 18:19:57.856254    4456 main.go:141] libmachine: Writing SSH key tar header
	I0314 18:19:57.856608    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0314 18:20:00.888266    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:00.888266    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:00.888714    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\disk.vhd' -SizeBytes 20000MB
	I0314 18:20:03.330163    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:03.330163    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:03.330488    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-832100-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0314 18:20:06.720664    4456 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-832100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0314 18:20:06.720664    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:06.720664    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-832100-m02 -DynamicMemoryEnabled $false
	I0314 18:20:08.838290    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:08.838290    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:08.839154    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-832100-m02 -Count 2
	I0314 18:20:10.912748    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:10.912748    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:10.912748    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-832100-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\boot2docker.iso'
	I0314 18:20:13.325996    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:13.326077    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:13.326077    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-832100-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\disk.vhd'
	I0314 18:20:15.801006    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:15.801006    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:15.801006    4456 main.go:141] libmachine: Starting VM...
	I0314 18:20:15.801006    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-832100-m02
	I0314 18:20:18.705799    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:18.705843    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:18.705843    4456 main.go:141] libmachine: Waiting for host to start...
	I0314 18:20:18.705887    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:20.795858    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:20.795858    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:20.795858    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:23.080745    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:23.080745    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:24.086566    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:26.118343    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:26.118532    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:26.118532    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:28.432690    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:28.432739    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:29.446734    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:31.510457    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:31.511401    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:31.511455    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:33.831428    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:33.831428    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:34.836006    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:36.891805    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:36.892536    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:36.892536    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:39.199446    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:39.199630    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:40.210024    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:42.252333    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:42.252425    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:42.252500    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:44.589079    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:20:44.590096    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:44.590144    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:46.539989    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:46.540894    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:46.540894    4456 machine.go:94] provisionDockerMachine start ...
	I0314 18:20:46.541174    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:48.515482    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:48.515482    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:48.515543    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:50.895160    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:20:50.895535    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:50.900320    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:50.900392    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:20:50.900392    4456 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:20:51.035434    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 18:20:51.035434    4456 buildroot.go:166] provisioning hostname "ha-832100-m02"
	I0314 18:20:51.035434    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:53.030125    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:53.031170    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:53.031170    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:55.368399    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:20:55.368399    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:55.372041    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:55.372726    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:20:55.372726    4456 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-832100-m02 && echo "ha-832100-m02" | sudo tee /etc/hostname
	I0314 18:20:55.527906    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-832100-m02
	
	I0314 18:20:55.527906    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:57.495035    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:57.495035    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:57.495035    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:59.820054    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:20:59.820054    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:59.824161    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:59.824161    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:20:59.824161    4456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:20:59.966337    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:20:59.966414    4456 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 18:20:59.966414    4456 buildroot.go:174] setting up certificates
	I0314 18:20:59.966462    4456 provision.go:84] configureAuth start
	I0314 18:20:59.966508    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:01.957124    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:01.957124    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:01.957215    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:04.303301    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:04.303301    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:04.303550    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:06.279950    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:06.279950    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:06.280007    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:08.643961    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:08.643961    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:08.643961    4456 provision.go:143] copyHostCerts
	I0314 18:21:08.644204    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 18:21:08.644251    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 18:21:08.644251    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 18:21:08.644782    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 18:21:08.645386    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 18:21:08.645386    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 18:21:08.645386    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 18:21:08.645984    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 18:21:08.646683    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 18:21:08.646683    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 18:21:08.646683    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 18:21:08.647224    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 18:21:08.648071    4456 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-832100-m02 san=[127.0.0.1 172.17.92.203 ha-832100-m02 localhost minikube]
	I0314 18:21:08.715064    4456 provision.go:177] copyRemoteCerts
	I0314 18:21:08.724847    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:21:08.724847    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:10.728304    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:10.728304    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:10.728810    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:13.044529    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:13.044807    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:13.045208    4456 sshutil.go:53] new ssh client: &{IP:172.17.92.203 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:21:13.150465    4456 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4251901s)
	I0314 18:21:13.150465    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 18:21:13.150975    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:21:13.195393    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 18:21:13.195806    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 18:21:13.236818    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 18:21:13.236818    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 18:21:13.278167    4456 provision.go:87] duration metric: took 13.3107223s to configureAuth
	I0314 18:21:13.278167    4456 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:21:13.278167    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:21:13.278764    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:15.240111    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:15.241054    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:15.241134    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:17.579794    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:17.579918    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:17.584051    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:21:17.584213    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:21:17.584213    4456 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 18:21:17.722389    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 18:21:17.722475    4456 buildroot.go:70] root file system type: tmpfs
	I0314 18:21:17.722475    4456 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 18:21:17.722475    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:19.728591    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:19.728591    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:19.728689    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:22.102428    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:22.102428    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:22.108989    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:21:22.108989    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:21:22.108989    4456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.90.10"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 18:21:22.276365    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.90.10
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 18:21:22.276468    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:24.258775    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:24.259188    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:24.259188    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:26.623481    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:26.623481    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:26.627753    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:21:26.628277    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:21:26.628357    4456 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 18:21:28.742697    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 18:21:28.742748    4456 machine.go:97] duration metric: took 42.1985223s to provisionDockerMachine
	I0314 18:21:28.742748    4456 client.go:171] duration metric: took 1m47.4146004s to LocalClient.Create
	I0314 18:21:28.742867    4456 start.go:167] duration metric: took 1m47.4146691s to libmachine.API.Create "ha-832100"
	I0314 18:21:28.742921    4456 start.go:293] postStartSetup for "ha-832100-m02" (driver="hyperv")
	I0314 18:21:28.742921    4456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:21:28.751936    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:21:28.751936    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:30.706802    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:30.707839    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:30.707839    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:33.081790    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:33.081790    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:33.082137    4456 sshutil.go:53] new ssh client: &{IP:172.17.92.203 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:21:33.192817    4456 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4405549s)
	I0314 18:21:33.201417    4456 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:21:33.208749    4456 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:21:33.208749    4456 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 18:21:33.209174    4456 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 18:21:33.209707    4456 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 18:21:33.209707    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 18:21:33.218523    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:21:33.235981    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 18:21:33.278610    4456 start.go:296] duration metric: took 4.5353558s for postStartSetup
	I0314 18:21:33.280832    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:35.259792    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:35.260507    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:35.260507    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:37.630630    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:37.630630    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:37.630954    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:21:37.632730    4456 start.go:128] duration metric: took 1m56.3070648s to createHost
	I0314 18:21:37.632837    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:39.604201    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:39.604230    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:39.604361    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:41.978201    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:41.978201    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:41.982110    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:21:41.982544    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:21:41.982544    4456 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:21:42.119239    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440502.378736030
	
	I0314 18:21:42.119318    4456 fix.go:216] guest clock: 1710440502.378736030
	I0314 18:21:42.119394    4456 fix.go:229] Guest: 2024-03-14 18:21:42.37873603 +0000 UTC Remote: 2024-03-14 18:21:37.63273 +0000 UTC m=+318.177673601 (delta=4.74600603s)
	I0314 18:21:42.119467    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:44.102908    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:44.102908    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:44.103007    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:46.466549    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:46.466865    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:46.470719    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:21:46.471099    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:21:46.471099    4456 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710440502
	I0314 18:21:46.625861    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 18:21:42 UTC 2024
	
	I0314 18:21:46.625861    4456 fix.go:236] clock set: Thu Mar 14 18:21:42 UTC 2024
	 (err=<nil>)
	I0314 18:21:46.625861    4456 start.go:83] releasing machines lock for "ha-832100-m02", held for 2m5.3000853s
	I0314 18:21:46.625861    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:48.579633    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:48.580629    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:48.580695    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:50.951108    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:50.951108    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:50.954485    4456 out.go:177] * Found network options:
	I0314 18:21:50.956768    4456 out.go:177]   - NO_PROXY=172.17.90.10
	W0314 18:21:50.958596    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:21:50.961097    4456 out.go:177]   - NO_PROXY=172.17.90.10
	W0314 18:21:50.962375    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:21:50.963204    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:21:50.965202    4456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:21:50.965202    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:50.973404    4456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 18:21:50.973404    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:52.971332    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:52.971332    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:52.971420    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:53.007148    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:53.007148    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:53.007413    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:55.382068    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:55.382123    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:55.382602    4456 sshutil.go:53] new ssh client: &{IP:172.17.92.203 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:21:55.410024    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:55.411133    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:55.411429    4456 sshutil.go:53] new ssh client: &{IP:172.17.92.203 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:21:55.553319    4456 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5795789s)
	I0314 18:21:55.553319    4456 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5877799s)
	W0314 18:21:55.553319    4456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:21:55.561463    4456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:21:55.588655    4456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:21:55.588711    4456 start.go:494] detecting cgroup driver to use...
	I0314 18:21:55.588768    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:21:55.629346    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 18:21:55.657509    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 18:21:55.676540    4456 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 18:21:55.685048    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 18:21:55.714301    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:21:55.741093    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 18:21:55.768933    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:21:55.797287    4456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:21:55.825160    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 18:21:55.853888    4456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:21:55.879472    4456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:21:55.906001    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:21:56.101249    4456 ssh_runner.go:195] Run: sudo systemctl restart containerd
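
The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the pause image to registry.k8s.io/pause:3.9, relax restrict_oom_score_adj, set SystemdCgroup = false so containerd uses the cgroupfs driver, migrate runtime classes to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, then restart containerd. The central rewrite, sketched in Go (forceCgroupfs is a stand-in for the sed call, not minikube code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // forceCgroupfs performs the same rewrite as the sed call above: any
    // "SystemdCgroup = ..." line in containerd's config.toml is flipped to
    // false, selecting the cgroupfs driver.
    func forceCgroupfs(configTOML string) string {
        re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
        return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }

    func main() {
        fmt.Print(forceCgroupfs("  SystemdCgroup = true\n"))
    }
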
	I0314 18:21:56.132783    4456 start.go:494] detecting cgroup driver to use...
	I0314 18:21:56.143142    4456 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 18:21:56.174635    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:21:56.206349    4456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:21:56.241135    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:21:56.274036    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:21:56.306547    4456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 18:21:56.363873    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:21:56.390643    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:21:56.431733    4456 ssh_runner.go:195] Run: which cri-dockerd
	I0314 18:21:56.447557    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 18:21:56.465136    4456 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 18:21:56.503837    4456 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 18:21:56.692789    4456 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 18:21:56.862304    4456 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 18:21:56.862304    4456 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 18:21:56.903157    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:21:57.095477    4456 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 18:21:59.579702    4456 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4840429s)
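
The "configuring docker to use cgroupfs" step copies a small /etc/docker/daemon.json (130 bytes in this run) and restarts dockerd. The log does not show the file's contents; a plausible minimal payload, sketched in Go with the field set as an assumption:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // daemonConfig is an assumed shape for the pushed file; the log only
    // shows its size, not its contents.
    type daemonConfig struct {
        ExecOpts  []string `json:"exec-opts"`
        LogDriver string   `json:"log-driver"`
    }

    func main() {
        b, _ := json.MarshalIndent(daemonConfig{
            ExecOpts:  []string{"native.cgroupdriver=cgroupfs"},
            LogDriver: "json-file",
        }, "", "  ")
        fmt.Println(string(b))
    }
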
	I0314 18:21:59.587869    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 18:21:59.620122    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:21:59.651516    4456 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 18:21:59.841504    4456 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 18:22:00.030840    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:22:00.223549    4456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 18:22:00.265825    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:22:00.298977    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:22:00.476299    4456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 18:22:00.568571    4456 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 18:22:00.578931    4456 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 18:22:00.587020    4456 start.go:562] Will wait 60s for crictl version
	I0314 18:22:00.595526    4456 ssh_runner.go:195] Run: which crictl
	I0314 18:22:00.610218    4456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:22:00.676046    4456 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 18:22:00.682932    4456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:22:00.724216    4456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:22:00.760230    4456 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 18:22:00.762534    4456 out.go:177]   - env NO_PROXY=172.17.90.10
	I0314 18:22:00.764590    4456 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 18:22:00.767581    4456 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 18:22:00.767581    4456 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 18:22:00.767581    4456 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 18:22:00.767581    4456 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 18:22:00.770579    4456 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 18:22:00.770579    4456 ip.go:210] interface addr: 172.17.80.1/20
	I0314 18:22:00.778578    4456 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 18:22:00.785303    4456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
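
The /etc/hosts one-liner above is an idempotent pin: drop any line already ending in a tab plus host.minikube.internal, append the fresh mapping, and sudo-copy the temp file back. The same transform in Go (pinHostsEntry is illustrative only):

    package main

    import (
        "fmt"
        "strings"
    )

    // pinHostsEntry reproduces the shell one-liner above: remove any stale
    // mapping for name, then append ip<TAB>name, so repeated runs converge.
    func pinHostsEntry(hosts, ip, name string) string {
        var out []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                out = append(out, line)
            }
        }
        out = append(out, ip+"\t"+name)
        return strings.Join(out, "\n") + "\n"
    }

    func main() {
        fmt.Print(pinHostsEntry("127.0.0.1\tlocalhost\n", "172.17.80.1", "host.minikube.internal"))
    }
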
	I0314 18:22:00.805099    4456 mustload.go:65] Loading cluster: ha-832100
	I0314 18:22:00.805629    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:22:00.805950    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:22:02.783943    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:22:02.783943    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:22:02.783943    4456 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:22:02.784595    4456 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100 for IP: 172.17.92.203
	I0314 18:22:02.784595    4456 certs.go:194] generating shared ca certs ...
	I0314 18:22:02.784595    4456 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:22:02.785156    4456 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 18:22:02.785379    4456 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 18:22:02.785604    4456 certs.go:256] generating profile certs ...
	I0314 18:22:02.785798    4456 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.key
	I0314 18:22:02.785798    4456 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.eb63f332
	I0314 18:22:02.785798    4456 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.eb63f332 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.90.10 172.17.92.203 172.17.95.254]
	I0314 18:22:03.076603    4456 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.eb63f332 ...
	I0314 18:22:03.076603    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.eb63f332: {Name:mka9d3bf3027e4ef73e17f329886422d122d9fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:22:03.077597    4456 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.eb63f332 ...
	I0314 18:22:03.078615    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.eb63f332: {Name:mk9bbe53e98d6a302e589182eb50882786a3f049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:22:03.078792    4456 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.eb63f332 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt
	I0314 18:22:03.089883    4456 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.eb63f332 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key
	I0314 18:22:03.095968    4456 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key
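
The apiserver cert generated above carries every address a client might dial: the service IPs 10.96.0.1 and 10.0.0.1, loopback, both control-plane node IPs, and the kube-vip VIP 172.17.95.254. A sketch of that SAN set as an x509 template in Go (fields beyond IPAddresses are omitted; this is not minikube's certs.go):

    package main

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "net"
    )

    // apiserverCertTemplate sketches the SAN set logged above: service IPs,
    // loopback, both control-plane node IPs, and the HA VIP.
    func apiserverCertTemplate() *x509.Certificate {
        sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "172.17.90.10", "172.17.92.203", "172.17.95.254"}
        tmpl := &x509.Certificate{Subject: pkix.Name{CommonName: "minikube"}}
        for _, s := range sans {
            tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
        }
        return tmpl
    }

    func main() {
        fmt.Println(len(apiserverCertTemplate().IPAddresses), "SAN IPs")
    }
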
	I0314 18:22:03.095968    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:22:03.095968    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:22:03.096988    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:22:03.097140    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:22:03.097188    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:22:03.097323    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:22:03.097423    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:22:03.097423    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:22:03.097423    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 18:22:03.098074    4456 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 18:22:03.098152    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 18:22:03.098440    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 18:22:03.098675    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 18:22:03.098869    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 18:22:03.099056    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 18:22:03.099368    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 18:22:03.099368    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 18:22:03.099576    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:22:03.099733    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:22:05.099737    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:22:05.099875    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:22:05.100091    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:22:07.489184    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:22:07.489184    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:22:07.489627    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:22:07.584214    4456 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0314 18:22:07.592013    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0314 18:22:07.620274    4456 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0314 18:22:07.626977    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0314 18:22:07.653502    4456 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0314 18:22:07.660186    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0314 18:22:07.688875    4456 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0314 18:22:07.695642    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0314 18:22:07.723707    4456 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0314 18:22:07.730252    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0314 18:22:07.758006    4456 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0314 18:22:07.764776    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0314 18:22:07.782880    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:22:07.826679    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 18:22:07.877138    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:22:07.917686    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 18:22:07.957897    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0314 18:22:07.999256    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 18:22:08.041079    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:22:08.085453    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 18:22:08.126309    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 18:22:08.169183    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 18:22:08.210358    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:22:08.251733    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0314 18:22:08.280623    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0314 18:22:08.308117    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0314 18:22:08.340433    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0314 18:22:08.372023    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0314 18:22:08.399929    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0314 18:22:08.428519    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0314 18:22:08.465981    4456 ssh_runner.go:195] Run: openssl version
	I0314 18:22:08.483293    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 18:22:08.510652    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 18:22:08.517357    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 18:22:08.526056    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 18:22:08.542485    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
	I0314 18:22:08.568656    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 18:22:08.599940    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 18:22:08.606468    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 18:22:08.615236    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 18:22:08.632437    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:22:08.660797    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:22:08.687584    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:22:08.695295    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:22:08.703907    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:22:08.722737    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
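
The openssl x509 -hash calls above explain the cryptic symlink names: OpenSSL resolves trusted CAs by hashed-subject filenames of the form <hash>.<n>, which is why 11052.pem, 110522.pem and minikubeCA.pem get links named 51391683.0, 3ec20f2e.0 and b5213941.0 under /etc/ssl/certs. The naming convention in a tiny Go sketch:

    package main

    import "fmt"

    // caLinkPath builds the c_rehash-style name OpenSSL looks up:
    // <subject-hash>.<n>, where n disambiguates hash collisions. The hash
    // itself is what `openssl x509 -hash -noout -in cert.pem` prints.
    func caLinkPath(subjectHash string, n int) string {
        return fmt.Sprintf("/etc/ssl/certs/%s.%d", subjectHash, n)
    }

    func main() {
        fmt.Println(caLinkPath("b5213941", 0)) // minikubeCA.pem's link above
    }
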
	I0314 18:22:08.749230    4456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:22:08.755242    4456 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:22:08.755242    4456 kubeadm.go:928] updating node {m02 172.17.92.203 8443 v1.28.4 docker true true} ...
	I0314 18:22:08.755941    4456 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-832100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.92.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:22:08.756008    4456 kube-vip.go:105] generating kube-vip config ...
	I0314 18:22:08.756041    4456 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
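
The manifest above runs kube-vip as a static pod on each control-plane node: instances elect a leader through the plndr-cp-lock Lease, and the leader answers ARP for the VIP 172.17.95.254 on eth0 and load-balances API traffic on port 8443. The settings that drive that behavior, collected in a Go sketch for reference:

    package main

    import "fmt"

    // kubeVIPSettings collects the knobs from the generated manifest above:
    // ARP-mode leader election on eth0 for the VIP, fronting the apiserver
    // port. Values mirror the static pod's env entries.
    func kubeVIPSettings() map[string]string {
        return map[string]string{
            "vip_arp":            "true",
            "vip_interface":      "eth0",
            "address":            "172.17.95.254",
            "port":               "8443",
            "vip_leaderelection": "true",
            "vip_leasename":      "plndr-cp-lock",
            "lb_enable":          "true",
            "lb_port":            "8443",
        }
    }

    func main() {
        fmt.Println(len(kubeVIPSettings()), "settings")
    }
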
	I0314 18:22:08.765523    4456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:22:08.782650    4456 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0314 18:22:08.790898    4456 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0314 18:22:08.810926    4456 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0314 18:22:08.810926    4456 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0314 18:22:08.810926    4456 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
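
Each binary fetch above is checksum-pinned: the URL carries a checksum=file:<url>.sha256 query so the cached file is verified against the published digest. The URL scheme, reconstructed in Go:

    package main

    import "fmt"

    // binaryURL reconstructs the download scheme above: each fetch is
    // pinned to the matching .sha256 file via a checksum query parameter.
    func binaryURL(version, binary string) string {
        base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/linux/amd64/%s", version, binary)
        return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
    }

    func main() {
        fmt.Println(binaryURL("v1.28.4", "kubeadm"))
    }
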
	I0314 18:22:09.727232    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:22:09.739949    4456 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:22:09.750877    4456 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0314 18:22:09.751537    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0314 18:22:17.844585    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:22:17.854214    4456 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:22:17.861030    4456 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0314 18:22:17.861222    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0314 18:22:22.321414    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:22:22.345359    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:22:22.355142    4456 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:22:22.361301    4456 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0314 18:22:22.361301    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0314 18:22:23.020492    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0314 18:22:23.037618    4456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0314 18:22:23.066254    4456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:22:23.095124    4456 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0314 18:22:23.134899    4456 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:22:23.143800    4456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:22:23.172678    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:22:23.355206    4456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:22:23.384806    4456 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:22:23.385381    4456 start.go:316] joinCluster: &{Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.92.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:22:23.385651    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0314 18:22:23.385727    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:22:25.384807    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:22:25.384807    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:22:25.384885    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:22:27.794823    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:22:27.794823    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:22:27.794823    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:22:27.992753    4456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6067657s)
	I0314 18:22:27.992926    4456 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.17.92.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:22:27.993011    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p7u2we.a8555h9i8xpsfr9n --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-832100-m02 --control-plane --apiserver-advertise-address=172.17.92.203 --apiserver-bind-port=8443"
	I0314 18:23:24.453643    4456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p7u2we.a8555h9i8xpsfr9n --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-832100-m02 --control-plane --apiserver-advertise-address=172.17.92.203 --apiserver-bind-port=8443": (56.4565214s)
	I0314 18:23:24.453643    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0314 18:23:25.205488    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-832100-m02 minikube.k8s.io/updated_at=2024_03_14T18_23_25_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=ha-832100 minikube.k8s.io/primary=false
	I0314 18:23:25.383511    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-832100-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0314 18:23:25.525981    4456 start.go:318] duration metric: took 1m2.1360754s to joinCluster
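
Joining m02 as a second control plane took three steps above: mint a join command on the primary with kubeadm token create --print-join-command --ttl=0, run kubeadm join against the VIP's DNS name with --control-plane and the node's own advertise address, then label the new node and remove its NoSchedule taint. The core of the join invocation, parameterized in Go (token and CA hash are placeholders, and flags such as --ignore-preflight-errors=all and --cri-socket are elided):

    package main

    import "fmt"

    // joinCommand sketches the invocation above. The token and CA hash are
    // placeholders, not the credentials minted in this run.
    func joinCommand(token, caHash, nodeName, nodeIP string) string {
        return fmt.Sprintf(
            "kubeadm join control-plane.minikube.internal:8443 --token %s "+
                "--discovery-token-ca-cert-hash sha256:%s --control-plane "+
                "--node-name=%s --apiserver-advertise-address=%s --apiserver-bind-port=8443",
            token, caHash, nodeName, nodeIP)
    }

    func main() {
        fmt.Println(joinCommand("<token>", "<ca-hash>", "ha-832100-m02", "172.17.92.203"))
    }
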
	I0314 18:23:25.526232    4456 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.92.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:23:25.529115    4456 out.go:177] * Verifying Kubernetes components...
	I0314 18:23:25.526431    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:23:25.539529    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:23:25.830984    4456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:23:25.857035    4456 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:23:25.858029    4456 kapi.go:59] client config for ha-832100: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-832100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-832100\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0314 18:23:25.858029    4456 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.95.254:8443 with https://172.17.90.10:8443
	I0314 18:23:25.859025    4456 node_ready.go:35] waiting up to 6m0s for node "ha-832100-m02" to be "Ready" ...
	I0314 18:23:25.859025    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:25.859025    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:25.859025    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:25.859025    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:25.876882    4456 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0314 18:23:26.373335    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:26.373399    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:26.373399    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:26.373399    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:26.380685    4456 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:23:26.866516    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:26.866516    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:26.866516    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:26.866516    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:26.872245    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:27.361464    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:27.361685    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:27.361685    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:27.361758    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:27.366521    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:27.868795    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:27.868854    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:27.868854    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:27.868854    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:27.873704    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:27.873704    4456 node_ready.go:53] node "ha-832100-m02" has status "Ready":"False"
	I0314 18:23:28.361234    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:28.361312    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:28.361312    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:28.361377    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:28.367031    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:28.869089    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:28.869328    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:28.869328    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:28.869328    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:28.873826    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:29.361659    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:29.361659    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:29.361659    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:29.361659    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:29.366934    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:29.869756    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:29.869756    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:29.869756    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:29.869756    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:29.874328    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:29.874735    4456 node_ready.go:53] node "ha-832100-m02" has status "Ready":"False"
	I0314 18:23:30.361543    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:30.361715    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:30.361715    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:30.361715    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:30.366478    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:30.869395    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:30.869451    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:30.869451    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:30.869451    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:31.228824    4456 round_trippers.go:574] Response Status: 200 OK in 359 milliseconds
	I0314 18:23:31.372366    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:31.372423    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:31.372423    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:31.372423    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:31.378114    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:31.863871    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:31.863965    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:31.863965    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:31.863965    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:31.869340    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:32.364196    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:32.364287    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:32.364287    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:32.364287    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:32.369322    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:32.370422    4456 node_ready.go:53] node "ha-832100-m02" has status "Ready":"False"
	I0314 18:23:32.868031    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:32.868124    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:32.868124    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:32.868124    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:32.872833    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:33.370723    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:33.370803    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:33.370803    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:33.370803    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:33.375511    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:33.873496    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:33.873496    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:33.873496    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:33.873496    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:33.877904    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:34.362897    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:34.362897    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:34.362897    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:34.362897    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:34.369071    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:23:34.867429    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:34.867516    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:34.867516    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:34.867516    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:34.872198    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:34.872748    4456 node_ready.go:53] node "ha-832100-m02" has status "Ready":"False"
	I0314 18:23:35.370004    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:35.370239    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.370239    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.370239    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.375131    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:35.874484    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:35.874685    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.874685    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.874685    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.879800    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:35.879984    4456 node_ready.go:49] node "ha-832100-m02" has status "Ready":"True"
	I0314 18:23:35.879984    4456 node_ready.go:38] duration metric: took 10.0202321s for node "ha-832100-m02" to be "Ready" ...
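
The node_ready wait above is a plain poll: GET the node object roughly every 500ms and stop once its Ready condition reports True, within a 6m budget. The shape of that loop as a self-contained Go sketch:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitReady mirrors the node_ready loop above: poll until the supplied
    // check reports Ready or the deadline passes, sleeping ~500ms between
    // attempts.
    func waitReady(timeout time.Duration, ready func() bool) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ready() {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("timed out waiting for Ready")
    }

    func main() {
        n := 0
        fmt.Println(waitReady(5*time.Second, func() bool { n++; return n >= 3 }))
    }
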
	I0314 18:23:35.879984    4456 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:23:35.880513    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:23:35.880645    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.880645    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.880645    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.887558    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:23:35.896050    4456 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5rf5x" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.896050    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5rf5x
	I0314 18:23:35.896050    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.896050    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.896050    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.900878    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:35.902637    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:35.902637    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.902637    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.902637    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.906677    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:23:35.907695    4456 pod_ready.go:92] pod "coredns-5dd5756b68-5rf5x" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:35.907695    4456 pod_ready.go:81] duration metric: took 11.6442ms for pod "coredns-5dd5756b68-5rf5x" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.907759    4456 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mnw55" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.907837    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mnw55
	I0314 18:23:35.907894    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.907894    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.907921    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.910663    4456 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:23:35.912332    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:35.912405    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.912405    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.912405    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.917045    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:35.917988    4456 pod_ready.go:92] pod "coredns-5dd5756b68-mnw55" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:35.917988    4456 pod_ready.go:81] duration metric: took 10.2286ms for pod "coredns-5dd5756b68-mnw55" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.917988    4456 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.918096    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100
	I0314 18:23:35.918096    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.918096    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.918096    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.920672    4456 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:23:35.921666    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:35.921666    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.921666    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.921666    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.925447    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:23:35.926255    4456 pod_ready.go:92] pod "etcd-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:35.926255    4456 pod_ready.go:81] duration metric: took 8.2012ms for pod "etcd-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.926255    4456 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.926255    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m02
	I0314 18:23:35.926255    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.926255    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.926255    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.929822    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:23:35.930669    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:35.930669    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.930669    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.930669    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.935432    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:35.935819    4456 pod_ready.go:92] pod "etcd-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:35.935819    4456 pod_ready.go:81] duration metric: took 9.563ms for pod "etcd-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.935819    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:36.076268    4456 request.go:629] Waited for 140.4387ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100
	I0314 18:23:36.076599    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100
	I0314 18:23:36.076599    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:36.076700    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:36.076700    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:36.081639    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:36.278220    4456 request.go:629] Waited for 195.0811ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:36.278567    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:36.278644    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:36.278644    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:36.278644    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:36.283125    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:36.284840    4456 pod_ready.go:92] pod "kube-apiserver-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:36.284840    4456 pod_ready.go:81] duration metric: took 348.9961ms for pod "kube-apiserver-ha-832100" in "kube-system" namespace to be "Ready" ...
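
The repeated "Waited for ... due to client-side throttling" lines come from client-go's client-side rate limiter, not from server-side API priority and fairness. A minimal sketch of the knob involved, assuming a standard client-go setup; the QPS/Burst values are illustrative, not what minikube configures:

    package sketch

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a Clientset whose client-side rate limiter allows
    // short bursts of GETs like the pod/node polling in this log. With the
    // client-go defaults (QPS=5, Burst=10) each extra request is delayed and
    // logged as "Waited for ... due to client-side throttling".
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // sustained requests per second (illustrative)
    	cfg.Burst = 100 // burst ceiling before throttling kicks in (illustrative)
    	return kubernetes.NewForConfig(cfg)
    }
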
	I0314 18:23:36.284936    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:36.480791    4456 request.go:629] Waited for 195.7012ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m02
	I0314 18:23:36.481213    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m02
	I0314 18:23:36.481213    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:36.481213    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:36.481213    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:36.491458    4456 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 18:23:36.683270    4456 request.go:629] Waited for 191.2322ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:36.683561    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:36.683561    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:36.683561    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:36.683674    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:36.689322    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:36.689860    4456 pod_ready.go:92] pod "kube-apiserver-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:36.689860    4456 pod_ready.go:81] duration metric: took 404.849ms for pod "kube-apiserver-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:36.689860    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:36.885618    4456 request.go:629] Waited for 195.4447ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100
	I0314 18:23:36.885618    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100
	I0314 18:23:36.885618    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:36.885618    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:36.885618    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:36.891180    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:37.089688    4456 request.go:629] Waited for 197.441ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:37.089768    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:37.089843    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:37.089843    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:37.089843    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:37.094716    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:37.095727    4456 pod_ready.go:92] pod "kube-controller-manager-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:37.095836    4456 pod_ready.go:81] duration metric: took 405.9049ms for pod "kube-controller-manager-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:37.095869    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:37.279118    4456 request.go:629] Waited for 182.8903ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100-m02
	I0314 18:23:37.283777    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100-m02
	I0314 18:23:37.283777    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:37.283777    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:37.283889    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:37.289232    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:37.483807    4456 request.go:629] Waited for 192.7041ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:37.483912    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:37.483990    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:37.483990    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:37.483990    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:37.490584    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:23:37.491122    4456 pod_ready.go:92] pod "kube-controller-manager-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:37.491122    4456 pod_ready.go:81] duration metric: took 395.1705ms for pod "kube-controller-manager-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:37.491122    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cnzzc" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:37.686953    4456 request.go:629] Waited for 195.8167ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cnzzc
	I0314 18:23:37.686953    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cnzzc
	I0314 18:23:37.686953    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:37.686953    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:37.686953    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:37.693064    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:23:37.889338    4456 request.go:629] Waited for 195.204ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:37.889724    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:37.889724    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:37.889724    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:37.889724    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:37.895028    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:37.895851    4456 pod_ready.go:92] pod "kube-proxy-cnzzc" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:37.895851    4456 pod_ready.go:81] duration metric: took 404.6997ms for pod "kube-proxy-cnzzc" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:37.895851    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g4l9q" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:38.076544    4456 request.go:629] Waited for 180.551ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4l9q
	I0314 18:23:38.076985    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4l9q
	I0314 18:23:38.076985    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:38.076985    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:38.076985    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:38.082123    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:38.278303    4456 request.go:629] Waited for 194.7908ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:38.278622    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:38.278622    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:38.278622    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:38.278622    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:38.284491    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:38.285025    4456 pod_ready.go:92] pod "kube-proxy-g4l9q" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:38.285025    4456 pod_ready.go:81] duration metric: took 389.1456ms for pod "kube-proxy-g4l9q" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:38.285025    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:38.487247    4456 request.go:629] Waited for 202.0586ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100
	I0314 18:23:38.487569    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100
	I0314 18:23:38.487569    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:38.487569    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:38.487614    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:38.493024    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:38.675634    4456 request.go:629] Waited for 181.9308ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:38.675976    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:38.675976    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:38.675976    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:38.675976    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:38.680745    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:38.682067    4456 pod_ready.go:92] pod "kube-scheduler-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:38.682165    4456 pod_ready.go:81] duration metric: took 396.9284ms for pod "kube-scheduler-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:38.682165    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:38.878111    4456 request.go:629] Waited for 195.8461ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100-m02
	I0314 18:23:38.878111    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100-m02
	I0314 18:23:38.878111    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:38.878111    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:38.878111    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:38.883769    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:39.081164    4456 request.go:629] Waited for 195.9411ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:39.081474    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:39.081508    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:39.081508    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:39.081508    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:39.086262    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:39.086262    4456 pod_ready.go:92] pod "kube-scheduler-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:39.086262    4456 pod_ready.go:81] duration metric: took 404.0674ms for pod "kube-scheduler-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:39.086262    4456 pod_ready.go:38] duration metric: took 3.2060458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
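
Each pod wait above follows the same pattern: GET the pod, inspect its Ready condition, then GET the owning node. A compact client-go sketch of the condition check only, under hypothetical names (waitPodReady is not minikube's function):

    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the named pod reports PodReady=True or the
    // timeout elapses, mirroring the pod_ready.go waits in this log.
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
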
	I0314 18:23:39.086262    4456 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:23:39.096839    4456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:23:39.120142    4456 api_server.go:72] duration metric: took 13.5929244s to wait for apiserver process to appear ...
	I0314 18:23:39.120142    4456 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:23:39.120142    4456 api_server.go:253] Checking apiserver healthz at https://172.17.90.10:8443/healthz ...
	I0314 18:23:39.130124    4456 api_server.go:279] https://172.17.90.10:8443/healthz returned 200:
	ok
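
The healthz probe above is a raw GET outside the typed API, expecting the literal body "ok". A sketch of the same check through client-go's REST client (apiServerHealthy is a hypothetical name):

    package sketch

    import (
    	"context"

    	"k8s.io/client-go/kubernetes"
    )

    // apiServerHealthy issues GET /healthz against the API server and treats
    // the literal response body "ok" as healthy, as seen in the log above.
    func apiServerHealthy(ctx context.Context, c *kubernetes.Clientset) (bool, error) {
    	body, err := c.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    	if err != nil {
    		return false, err
    	}
    	return string(body) == "ok", nil
    }
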
	I0314 18:23:39.130124    4456 round_trippers.go:463] GET https://172.17.90.10:8443/version
	I0314 18:23:39.130124    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:39.130124    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:39.130124    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:39.132711    4456 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:23:39.133620    4456 api_server.go:141] control plane version: v1.28.4
	I0314 18:23:39.133620    4456 api_server.go:131] duration metric: took 13.4767ms to wait for apiserver health ...
	I0314 18:23:39.133699    4456 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:23:39.286917    4456 request.go:629] Waited for 152.9933ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:23:39.287187    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:23:39.287187    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:39.287187    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:39.287222    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:39.294794    4456 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:23:39.301151    4456 system_pods.go:59] 17 kube-system pods found
	I0314 18:23:39.301151    4456 system_pods.go:61] "coredns-5dd5756b68-5rf5x" [a1975ad0-d327-4b3a-81a0-ead7c000b839] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "coredns-5dd5756b68-mnw55" [1eb87fcd-6c11-4457-b9dc-aaa8ec89f851] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "etcd-ha-832100" [db669e0d-400b-4b97-a76f-53f15d844a6d] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "etcd-ha-832100-m02" [0127bd94-9828-4de0-9724-82b7de2a3730] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kindnet-6n7bk" [a1281a26-baf8-4566-b964-e4b042aceae9] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kindnet-jvbts" [1070cc03-2571-4d58-9446-b704ad17b1b1] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-apiserver-ha-832100" [30d411af-dab6-44d2-9887-a08a042d6150] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-apiserver-ha-832100-m02" [53db6070-884e-4df1-b77b-15a6415384db] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-controller-manager-ha-832100" [6d430700-f7cd-473e-98a7-c5d4f6c0b984] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-controller-manager-ha-832100-m02" [81fa8e3e-357e-4a7a-8acc-4481c0292f26] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-proxy-cnzzc" [83a6c448-c577-4c77-8e21-11efe6bab9ac] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-proxy-g4l9q" [5e8dd3b4-2059-47f9-aca1-cadb8dc76b4d] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-scheduler-ha-832100" [28207820-b6cd-4573-82b1-9fa8b88741b1] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-scheduler-ha-832100-m02" [d0d35814-e1ca-4136-9e0a-5a578f4d08e2] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-vip-ha-832100" [c20342af-ece8-442d-88e0-b15cd453b554] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-vip-ha-832100-m02" [f27cb2fa-b6eb-4c83-97c4-8582bb73aca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:23:39.301151    4456 system_pods.go:61] "storage-provisioner" [099c1e5d-1c0b-4df7-b023-1f8da354c4e6] Running
	I0314 18:23:39.301151    4456 system_pods.go:74] duration metric: took 167.4397ms to wait for pod list to return data ...
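
The 17-pod inventory above is a single List call over kube-system. A sketch of that listing, counting pods whose phase is Running (names hypothetical):

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // runningSystemPods lists kube-system pods and counts the Running ones.
    // Note the kube-vip entries above are Running yet not Ready, so phase
    // alone is a weaker check than the Ready condition.
    func runningSystemPods(ctx context.Context, c kubernetes.Interface) (int, error) {
    	pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return 0, err
    	}
    	n := 0
    	for _, p := range pods.Items {
    		if p.Status.Phase == corev1.PodRunning {
    			n++
    		}
    	}
    	return n, nil
    }
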
	I0314 18:23:39.301151    4456 default_sa.go:34] waiting for default service account to be created ...
	I0314 18:23:39.477127    4456 request.go:629] Waited for 175.8707ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:23:39.477301    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:23:39.477301    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:39.477301    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:39.477301    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:39.485304    4456 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:23:39.485304    4456 default_sa.go:45] found service account: "default"
	I0314 18:23:39.485304    4456 default_sa.go:55] duration metric: took 184.1393ms for default service account to be created ...
	I0314 18:23:39.485304    4456 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 18:23:39.679101    4456 request.go:629] Waited for 192.6625ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:23:39.679101    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:23:39.679101    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:39.679101    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:39.679101    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:39.693764    4456 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0314 18:23:39.699719    4456 system_pods.go:86] 17 kube-system pods found
	I0314 18:23:39.699719    4456 system_pods.go:89] "coredns-5dd5756b68-5rf5x" [a1975ad0-d327-4b3a-81a0-ead7c000b839] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "coredns-5dd5756b68-mnw55" [1eb87fcd-6c11-4457-b9dc-aaa8ec89f851] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "etcd-ha-832100" [db669e0d-400b-4b97-a76f-53f15d844a6d] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "etcd-ha-832100-m02" [0127bd94-9828-4de0-9724-82b7de2a3730] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "kindnet-6n7bk" [a1281a26-baf8-4566-b964-e4b042aceae9] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "kindnet-jvbts" [1070cc03-2571-4d58-9446-b704ad17b1b1] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "kube-apiserver-ha-832100" [30d411af-dab6-44d2-9887-a08a042d6150] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "kube-apiserver-ha-832100-m02" [53db6070-884e-4df1-b77b-15a6415384db] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-controller-manager-ha-832100" [6d430700-f7cd-473e-98a7-c5d4f6c0b984] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-controller-manager-ha-832100-m02" [81fa8e3e-357e-4a7a-8acc-4481c0292f26] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-proxy-cnzzc" [83a6c448-c577-4c77-8e21-11efe6bab9ac] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-proxy-g4l9q" [5e8dd3b4-2059-47f9-aca1-cadb8dc76b4d] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-scheduler-ha-832100" [28207820-b6cd-4573-82b1-9fa8b88741b1] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-scheduler-ha-832100-m02" [d0d35814-e1ca-4136-9e0a-5a578f4d08e2] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-vip-ha-832100" [c20342af-ece8-442d-88e0-b15cd453b554] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-vip-ha-832100-m02" [f27cb2fa-b6eb-4c83-97c4-8582bb73aca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:23:39.700469    4456 system_pods.go:89] "storage-provisioner" [099c1e5d-1c0b-4df7-b023-1f8da354c4e6] Running
	I0314 18:23:39.700469    4456 system_pods.go:126] duration metric: took 214.1821ms to wait for k8s-apps to be running ...
	I0314 18:23:39.700469    4456 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 18:23:39.709846    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:23:39.733637    4456 system_svc.go:56] duration metric: took 33.1354ms WaitForService to wait for kubelet
	I0314 18:23:39.733684    4456 kubeadm.go:576] duration metric: took 14.2064216s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:23:39.733750    4456 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:23:39.883651    4456 request.go:629] Waited for 149.5757ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes
	I0314 18:23:39.883829    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes
	I0314 18:23:39.883829    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:39.883829    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:39.883829    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:39.889165    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:39.890157    4456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:23:39.890255    4456 node_conditions.go:123] node cpu capacity is 2
	I0314 18:23:39.890255    4456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:23:39.890255    4456 node_conditions.go:123] node cpu capacity is 2
	I0314 18:23:39.890255    4456 node_conditions.go:105] duration metric: took 156.4936ms to run NodePressure ...
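
The NodePressure pass above reads each node's reported capacity; two nodes explain the repeated pair of lines. A sketch of the same read (printNodeCapacity is hypothetical):

    package sketch

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists all nodes and prints the ephemeral-storage and
    // cpu capacity each one reports, matching the node_conditions lines above.
    func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
    	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }
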
	I0314 18:23:39.890359    4456 start.go:240] waiting for startup goroutines ...
	I0314 18:23:39.890450    4456 start.go:254] writing updated cluster config ...
	I0314 18:23:39.894037    4456 out.go:177] 
	I0314 18:23:39.907001    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:23:39.907662    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:23:39.912810    4456 out.go:177] * Starting "ha-832100-m03" control-plane node in "ha-832100" cluster
	I0314 18:23:39.915117    4456 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:23:39.915117    4456 cache.go:56] Caching tarball of preloaded images
	I0314 18:23:39.915784    4456 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 18:23:39.915784    4456 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 18:23:39.916315    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:23:39.921902    4456 start.go:360] acquireMachinesLock for ha-832100-m03: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:23:39.922007    4456 start.go:364] duration metric: took 53.1µs to acquireMachinesLock for "ha-832100-m03"
	I0314 18:23:39.922007    4456 start.go:93] Provisioning new machine with config: &{Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.92.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:23:39.922007    4456 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0314 18:23:39.925483    4456 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 18:23:39.925483    4456 start.go:159] libmachine.API.Create for "ha-832100" (driver="hyperv")
	I0314 18:23:39.925483    4456 client.go:168] LocalClient.Create starting
	I0314 18:23:39.926249    4456 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0314 18:23:39.926249    4456 main.go:141] libmachine: Decoding PEM data...
	I0314 18:23:39.926249    4456 main.go:141] libmachine: Parsing certificate...
	I0314 18:23:39.926249    4456 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0314 18:23:39.926249    4456 main.go:141] libmachine: Decoding PEM data...
	I0314 18:23:39.926249    4456 main.go:141] libmachine: Parsing certificate...
	I0314 18:23:39.926249    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0314 18:23:41.726144    4456 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0314 18:23:41.726144    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:41.726144    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0314 18:23:43.359650    4456 main.go:141] libmachine: [stdout =====>] : False
	
	I0314 18:23:43.359650    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:43.359650    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 18:23:44.752810    4456 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 18:23:44.753340    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:44.753340    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 18:23:48.198508    4456 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 18:23:48.198584    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:48.200246    4456 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:23:48.506298    4456 main.go:141] libmachine: Creating SSH key...
	I0314 18:23:48.732710    4456 main.go:141] libmachine: Creating VM...
	I0314 18:23:48.732710    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 18:23:51.388088    4456 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 18:23:51.388088    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:51.388618    4456 main.go:141] libmachine: Using switch "Default Switch"
	I0314 18:23:51.388618    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 18:23:53.047491    4456 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 18:23:53.047491    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:53.047672    4456 main.go:141] libmachine: Creating VHD
	I0314 18:23:53.047672    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0314 18:23:56.597818    4456 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : ED37C6A0-44DF-40B6-8B14-3CF0BECB7168
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0314 18:23:56.597904    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:56.597996    4456 main.go:141] libmachine: Writing magic tar header
	I0314 18:23:56.598072    4456 main.go:141] libmachine: Writing SSH key tar header
	I0314 18:23:56.606196    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0314 18:23:59.608221    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:23:59.613270    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:59.613369    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\disk.vhd' -SizeBytes 20000MB
	I0314 18:24:01.996273    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:01.996273    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:01.996273    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-832100-m03 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0314 18:24:05.415917    4456 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-832100-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0314 18:24:05.416259    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:05.416320    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-832100-m03 -DynamicMemoryEnabled $false
	I0314 18:24:07.489518    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:07.489518    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:07.489518    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-832100-m03 -Count 2
	I0314 18:24:09.527545    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:09.528089    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:09.528089    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-832100-m03 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\boot2docker.iso'
	I0314 18:24:11.930595    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:11.930648    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:11.930648    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-832100-m03 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\disk.vhd'
	I0314 18:24:14.389765    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:14.389765    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:14.389765    4456 main.go:141] libmachine: Starting VM...
	I0314 18:24:14.390501    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-832100-m03
	I0314 18:24:17.284393    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:17.284393    4456 main.go:141] libmachine: [stderr =====>] : 
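
Condensed, the create-VM phase above is nine PowerShell invocations in a fixed order; note the small -Fixed VHD created first so the SSH key can be written into it as a raw tar header before the disk is converted to Dynamic and grown. A sketch of the sequence, assuming a hypothetical runPS helper that shells out as in the previous sketch:

    package sketch

    // createVM replays the create-VM steps from this log in order, stopping
    // at the first failure. Paths and names are the ones shown above; real
    // code would take them as parameters.
    func createVM(runPS func(string) error) error {
    	dir := `C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03`
    	steps := []string{
    		// fixed 10MB VHD first, so the SSH-key tar header can be written in
    		`Hyper-V\New-VHD -Path '` + dir + `\fixed.vhd' -SizeBytes 10MB -Fixed`,
    		`Hyper-V\Convert-VHD -Path '` + dir + `\fixed.vhd' -DestinationPath '` + dir + `\disk.vhd' -VHDType Dynamic -DeleteSource`,
    		`Hyper-V\Resize-VHD -Path '` + dir + `\disk.vhd' -SizeBytes 20000MB`,
    		`Hyper-V\New-VM ha-832100-m03 -Path '` + dir + `' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
    		`Hyper-V\Set-VMMemory -VMName ha-832100-m03 -DynamicMemoryEnabled $false`,
    		`Hyper-V\Set-VMProcessor ha-832100-m03 -Count 2`,
    		`Hyper-V\Set-VMDvdDrive -VMName ha-832100-m03 -Path '` + dir + `\boot2docker.iso'`,
    		`Hyper-V\Add-VMHardDiskDrive -VMName ha-832100-m03 -Path '` + dir + `\disk.vhd'`,
    		`Hyper-V\Start-VM ha-832100-m03`,
    	}
    	for _, s := range steps {
    		if err := runPS(s); err != nil {
    			return err
    		}
    	}
    	return nil
    }
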
	I0314 18:24:17.284393    4456 main.go:141] libmachine: Waiting for host to start...
	I0314 18:24:17.284644    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:19.362431    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:19.363045    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:19.363148    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:21.691601    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:21.692310    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:22.704524    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:24.704052    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:24.704207    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:24.704253    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:27.013832    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:27.013832    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:28.015167    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:30.032828    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:30.032828    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:30.032828    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:32.348790    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:32.349001    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:33.359120    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:35.394217    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:35.394217    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:35.394217    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:37.669171    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:37.669171    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:38.678659    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:40.705115    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:40.705906    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:40.705970    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:43.058138    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:24:43.058138    4456 main.go:141] libmachine: [stderr =====>] : 
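
"Waiting for host to start..." above is a poll: the state query returns Running almost immediately, but the adapter's first IPv4 address stays empty until the guest obtains a DHCP lease (roughly 25 seconds here). A sketch of that loop, with runPSOut a hypothetical helper returning the command's trimmed stdout:

    package sketch

    import (
    	"fmt"
    	"time"
    )

    // waitForIP polls VM state and the first reported address until Hyper-V
    // returns a non-empty IP, as the repeated Get-VM queries above do.
    func waitForIP(runPSOut func(string) (string, error), vm string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		state, err := runPSOut(`( Hyper-V\Get-VM ` + vm + ` ).state`)
    		if err != nil || state != "Running" {
    			time.Sleep(time.Second)
    			continue
    		}
    		ip, err := runPSOut(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
    		if err == nil && ip != "" {
    			return ip, nil
    		}
    		time.Sleep(time.Second) // no address until the guest's DHCP lease lands
    	}
    	return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
    }
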
	I0314 18:24:43.058138    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:44.994117    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:44.994117    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:44.994117    4456 machine.go:94] provisionDockerMachine start ...
	I0314 18:24:44.994245    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:46.985416    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:46.985416    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:46.986069    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:49.344205    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:24:49.344443    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:49.348000    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:24:49.357524    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:24:49.357524    4456 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:24:49.485978    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 18:24:49.485978    4456 buildroot.go:166] provisioning hostname "ha-832100-m03"
	I0314 18:24:49.485978    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:51.435939    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:51.436654    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:51.436752    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:53.773459    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:24:53.773459    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:53.778134    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:24:53.778134    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:24:53.778134    4456 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-832100-m03 && echo "ha-832100-m03" | sudo tee /etc/hostname
	I0314 18:24:53.925684    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-832100-m03
	
	I0314 18:24:53.925684    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:55.870992    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:55.870992    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:55.870992    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:58.253424    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:24:58.253561    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:58.257148    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:24:58.257148    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:24:58.257148    4456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832100-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832100-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832100-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:24:58.388339    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:24:58.388339    4456 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 18:24:58.388339    4456 buildroot.go:174] setting up certificates
	I0314 18:24:58.388339    4456 provision.go:84] configureAuth start
	I0314 18:24:58.388339    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:00.342689    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:00.343514    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:00.343514    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:02.698584    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:02.699241    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:02.699241    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:04.661912    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:04.662311    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:04.662311    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:07.021304    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:07.021304    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:07.021304    4456 provision.go:143] copyHostCerts
	I0314 18:25:07.021477    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 18:25:07.021477    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 18:25:07.021477    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 18:25:07.021969    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 18:25:07.022912    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 18:25:07.023160    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 18:25:07.023160    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 18:25:07.023516    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 18:25:07.024393    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 18:25:07.024739    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 18:25:07.024739    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 18:25:07.025149    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 18:25:07.025575    4456 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-832100-m03 san=[127.0.0.1 172.17.89.54 ha-832100-m03 localhost minikube]
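
The configureAuth step above mints a CA-signed server certificate whose SANs are exactly the san=[...] list in the log, so the Docker daemon's TLS endpoint verifies against either the VM's IP or its hostnames. A standard-library sketch of that issuance (error checks elided for brevity; the inline self-signed CA stands in for minikube's ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-in CA; real code would load ca.pem / ca-key.pem from disk.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-832100-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as in the log line above
    		DNSNames:    []string{"ha-832100-m03", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.89.54")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
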
	I0314 18:25:07.222638    4456 provision.go:177] copyRemoteCerts
	I0314 18:25:07.231650    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:25:07.231650    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:09.205126    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:09.205922    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:09.205922    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:11.551218    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:11.551218    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:11.551601    4456 sshutil.go:53] new ssh client: &{IP:172.17.89.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\id_rsa Username:docker}
	I0314 18:25:11.656175    4456 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4242079s)
	I0314 18:25:11.656240    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 18:25:11.656362    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:25:11.703292    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 18:25:11.703458    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 18:25:11.751861    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 18:25:11.751861    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 18:25:11.795910    4456 provision.go:87] duration metric: took 13.4066102s to configureAuth
	I0314 18:25:11.796907    4456 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:25:11.796907    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:25:11.796907    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:13.753295    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:13.753334    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:13.753407    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:16.143073    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:16.143940    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:16.147776    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:25:16.148167    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:25:16.148167    4456 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 18:25:16.278483    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 18:25:16.278571    4456 buildroot.go:70] root file system type: tmpfs
	I0314 18:25:16.278798    4456 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 18:25:16.278868    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:18.244263    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:18.244263    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:18.244374    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:20.588253    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:20.588253    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:20.595073    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:25:20.595734    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:25:20.595734    4456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.90.10"
	Environment="NO_PROXY=172.17.90.10,172.17.92.203"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 18:25:20.740991    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.90.10
	Environment=NO_PROXY=172.17.90.10,172.17.92.203
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 18:25:20.741580    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:22.717574    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:22.717574    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:22.717574    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:25.067588    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:25.067588    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:25.071183    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:25:25.071242    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:25:25.071242    4456 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 18:25:27.177254    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 18:25:27.177254    4456 machine.go:97] duration metric: took 42.1801112s to provisionDockerMachine
	I0314 18:25:27.177254    4456 client.go:171] duration metric: took 1m47.244054s to LocalClient.Create
	I0314 18:25:27.177794    4456 start.go:167] duration metric: took 1m47.244054s to libmachine.API.Create "ha-832100"
	I0314 18:25:27.177843    4456 start.go:293] postStartSetup for "ha-832100-m03" (driver="hyperv")
	I0314 18:25:27.177866    4456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:25:27.186184    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:25:27.186184    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:29.200667    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:29.201124    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:29.201203    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:31.559491    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:31.559491    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:31.560327    4456 sshutil.go:53] new ssh client: &{IP:172.17.89.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\id_rsa Username:docker}
	I0314 18:25:31.653939    4456 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4674328s)
	I0314 18:25:31.663804    4456 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:25:31.670686    4456 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:25:31.670771    4456 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 18:25:31.671063    4456 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 18:25:31.671320    4456 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 18:25:31.671320    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 18:25:31.680617    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:25:31.698729    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 18:25:31.742770    4456 start.go:296] duration metric: took 4.5645982s for postStartSetup
	I0314 18:25:31.744965    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:33.720266    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:33.721015    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:33.721015    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:36.063260    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:36.063260    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:36.064366    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:25:36.066192    4456 start.go:128] duration metric: took 1m56.1358274s to createHost
	I0314 18:25:36.066192    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:38.012377    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:38.012377    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:38.012517    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:40.377672    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:40.377925    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:40.382059    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:25:40.382425    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:25:40.382498    4456 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:25:40.512392    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440740.773172686
	
	I0314 18:25:40.512392    4456 fix.go:216] guest clock: 1710440740.773172686
	I0314 18:25:40.512488    4456 fix.go:229] Guest: 2024-03-14 18:25:40.773172686 +0000 UTC Remote: 2024-03-14 18:25:36.0661926 +0000 UTC m=+556.593856501 (delta=4.706980086s)
	I0314 18:25:40.512488    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:42.467679    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:42.467679    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:42.468370    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:44.797230    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:44.797230    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:44.802081    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:25:44.802684    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:25:44.802684    4456 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710440740
	I0314 18:25:44.940695    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 18:25:40 UTC 2024
	
	I0314 18:25:44.941226    4456 fix.go:236] clock set: Thu Mar 14 18:25:40 UTC 2024
	 (err=<nil>)
	I0314 18:25:44.941226    4456 start.go:83] releasing machines lock for "ha-832100-m03", held for 2m5.0102203s
	I0314 18:25:44.941400    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:46.886916    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:46.886916    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:46.887149    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:49.220645    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:49.220645    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:49.225136    4456 out.go:177] * Found network options:
	I0314 18:25:49.227214    4456 out.go:177]   - NO_PROXY=172.17.90.10,172.17.92.203
	W0314 18:25:49.229799    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:25:49.229799    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:25:49.231853    4456 out.go:177]   - NO_PROXY=172.17.90.10,172.17.92.203
	W0314 18:25:49.233235    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:25:49.233235    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:25:49.234233    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:25:49.234233    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:25:49.237013    4456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:25:49.238091    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:49.243305    4456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 18:25:49.244307    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:51.236990    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:51.236990    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:51.236990    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:51.249881    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:51.249881    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:51.249881    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:53.627475    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:53.627475    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:53.627629    4456 sshutil.go:53] new ssh client: &{IP:172.17.89.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\id_rsa Username:docker}
	I0314 18:25:53.650585    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:53.650585    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:53.650585    4456 sshutil.go:53] new ssh client: &{IP:172.17.89.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\id_rsa Username:docker}
	I0314 18:25:53.774688    4456 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5310553s)
	W0314 18:25:53.774688    4456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:25:53.774812    4456 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5374699s)
	I0314 18:25:53.783576    4456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:25:53.810333    4456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:25:53.810333    4456 start.go:494] detecting cgroup driver to use...
	I0314 18:25:53.810333    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:25:53.850261    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 18:25:53.884126    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 18:25:53.904936    4456 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 18:25:53.917935    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 18:25:53.948932    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:25:53.983402    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 18:25:54.015433    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:25:54.044429    4456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:25:54.072457    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 18:25:54.101345    4456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:25:54.128213    4456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:25:54.155984    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:25:54.347837    4456 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 18:25:54.379685    4456 start.go:494] detecting cgroup driver to use...
	I0314 18:25:54.390679    4456 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 18:25:54.422190    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:25:54.452805    4456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:25:54.491981    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:25:54.522884    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:25:54.553646    4456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 18:25:54.636093    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:25:54.659013    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:25:54.702208    4456 ssh_runner.go:195] Run: which cri-dockerd
	I0314 18:25:54.718333    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 18:25:54.743201    4456 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 18:25:54.784354    4456 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 18:25:54.973991    4456 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 18:25:55.152370    4456 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 18:25:55.152370    4456 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 18:25:55.190439    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:25:55.366737    4456 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 18:25:57.866034    4456 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4990428s)
	I0314 18:25:57.874926    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 18:25:57.910228    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:25:57.943810    4456 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 18:25:58.135489    4456 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 18:25:58.328470    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:25:58.511718    4456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 18:25:58.548132    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:25:58.580352    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:25:58.769254    4456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 18:25:58.865204    4456 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 18:25:58.875707    4456 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 18:25:58.886002    4456 start.go:562] Will wait 60s for crictl version
	I0314 18:25:58.895571    4456 ssh_runner.go:195] Run: which crictl
	I0314 18:25:58.911288    4456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:25:58.984261    4456 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 18:25:58.993731    4456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:25:59.034062    4456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:25:59.069392    4456 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 18:25:59.073124    4456 out.go:177]   - env NO_PROXY=172.17.90.10
	I0314 18:25:59.075295    4456 out.go:177]   - env NO_PROXY=172.17.90.10,172.17.92.203
	I0314 18:25:59.077622    4456 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 18:25:59.082799    4456 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 18:25:59.082871    4456 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 18:25:59.082871    4456 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 18:25:59.082871    4456 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 18:25:59.085843    4456 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 18:25:59.085898    4456 ip.go:210] interface addr: 172.17.80.1/20
	I0314 18:25:59.097874    4456 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 18:25:59.103737    4456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:25:59.123902    4456 mustload.go:65] Loading cluster: ha-832100
	I0314 18:25:59.124536    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:25:59.124722    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:26:01.063855    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:26:01.063855    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:26:01.063855    4456 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:26:01.065177    4456 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100 for IP: 172.17.89.54
	I0314 18:26:01.065177    4456 certs.go:194] generating shared ca certs ...
	I0314 18:26:01.065259    4456 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:26:01.065837    4456 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 18:26:01.066174    4456 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 18:26:01.066356    4456 certs.go:256] generating profile certs ...
	I0314 18:26:01.066718    4456 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.key
	I0314 18:26:01.066718    4456 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.4377854c
	I0314 18:26:01.067051    4456 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.4377854c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.90.10 172.17.92.203 172.17.89.54 172.17.95.254]
	I0314 18:26:01.241196    4456 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.4377854c ...
	I0314 18:26:01.241196    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.4377854c: {Name:mka1507243b4541904331c4d3a2bb32413478303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:26:01.242196    4456 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.4377854c ...
	I0314 18:26:01.242196    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.4377854c: {Name:mk0f1cb39b26dc4d2052fa37e53b0b761513c8aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:26:01.243328    4456 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.4377854c -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt
	I0314 18:26:01.256384    4456 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.4377854c -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key
	I0314 18:26:01.257588    4456 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key
	I0314 18:26:01.257588    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:26:01.258391    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:26:01.258391    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:26:01.258722    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:26:01.258802    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:26:01.258926    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:26:01.266248    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:26:01.266852    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:26:01.267303    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 18:26:01.267543    4456 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 18:26:01.267616    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 18:26:01.267774    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 18:26:01.267774    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 18:26:01.267774    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 18:26:01.268376    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 18:26:01.268580    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 18:26:01.268728    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:26:01.268802    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 18:26:01.268955    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:26:03.233614    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:26:03.233614    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:26:03.234386    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:26:05.595287    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:26:05.595329    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:26:05.595810    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:26:05.687697    4456 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0314 18:26:05.695391    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0314 18:26:05.722310    4456 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0314 18:26:05.729552    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0314 18:26:05.756798    4456 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0314 18:26:05.764650    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0314 18:26:05.792790    4456 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0314 18:26:05.799163    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0314 18:26:05.825720    4456 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0314 18:26:05.832424    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0314 18:26:05.860634    4456 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0314 18:26:05.866986    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0314 18:26:05.884498    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:26:05.930334    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 18:26:05.971883    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:26:06.014824    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 18:26:06.057952    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0314 18:26:06.104959    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 18:26:06.148355    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:26:06.189938    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 18:26:06.232419    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 18:26:06.275625    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:26:06.320216    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 18:26:06.362681    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0314 18:26:06.393445    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0314 18:26:06.421928    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0314 18:26:06.452991    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0314 18:26:06.481780    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0314 18:26:06.515946    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0314 18:26:06.545965    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0314 18:26:06.585050    4456 ssh_runner.go:195] Run: openssl version
	I0314 18:26:06.602743    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 18:26:06.630114    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 18:26:06.637902    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 18:26:06.646914    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 18:26:06.663889    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
	I0314 18:26:06.690968    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 18:26:06.718253    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 18:26:06.724795    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 18:26:06.733288    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 18:26:06.750271    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:26:06.778689    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:26:06.807161    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:26:06.813667    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:26:06.821890    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:26:06.839789    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:26:06.868313    4456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:26:06.874502    4456 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:26:06.874806    4456 kubeadm.go:928] updating node {m03 172.17.89.54 8443 v1.28.4 docker true true} ...
	I0314 18:26:06.874957    4456 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-832100-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.89.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:26:06.875037    4456 kube-vip.go:105] generating kube-vip config ...
	I0314 18:26:06.875037    4456 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0314 18:26:06.883600    4456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:26:06.900159    4456 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0314 18:26:06.904682    4456 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0314 18:26:06.925551    4456 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0314 18:26:06.925551    4456 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0314 18:26:06.925551    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:26:06.925551    4456 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0314 18:26:06.926288    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:26:06.937792    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:26:06.938504    4456 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:26:06.939797    4456 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:26:06.958012    4456 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0314 18:26:06.958092    4456 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0314 18:26:06.958092    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:26:06.958239    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0314 18:26:06.958239    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0314 18:26:06.967370    4456 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:26:07.037045    4456 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0314 18:26:07.037265    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0314 18:26:08.136690    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0314 18:26:08.154748    4456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0314 18:26:08.188370    4456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:26:08.220010    4456 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0314 18:26:08.263105    4456 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:26:08.269390    4456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:26:08.298928    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:26:08.477984    4456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:26:08.504870    4456 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:26:08.505492    4456 start.go:316] joinCluster: &{Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.92.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.17.89.54 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:26:08.505492    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0314 18:26:08.505492    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:26:10.468760    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:26:10.468999    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:26:10.468999    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:26:12.900808    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:26:12.901298    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:26:12.901298    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:26:13.501772    4456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9959166s)
	I0314 18:26:13.501937    4456 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.17.89.54 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:26:13.502040    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8p1ag1.y3mj4i16tjb2rzcp --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-832100-m03 --control-plane --apiserver-advertise-address=172.17.89.54 --apiserver-bind-port=8443"
	I0314 18:26:58.220623    4456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8p1ag1.y3mj4i16tjb2rzcp --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-832100-m03 --control-plane --apiserver-advertise-address=172.17.89.54 --apiserver-bind-port=8443": (44.7152369s)
	I0314 18:26:58.220623    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0314 18:26:58.979403    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-832100-m03 minikube.k8s.io/updated_at=2024_03_14T18_26_58_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=ha-832100 minikube.k8s.io/primary=false
	I0314 18:26:59.127322    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-832100-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0314 18:26:59.387400    4456 start.go:318] duration metric: took 50.8781682s to joinCluster
	I0314 18:26:59.387400    4456 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.17.89.54 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:26:59.388421    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:26:59.390033    4456 out.go:177] * Verifying Kubernetes components...
	I0314 18:26:59.402988    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:26:59.780995    4456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:26:59.815706    4456 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:26:59.816473    4456 kapi.go:59] client config for ha-832100: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-832100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-832100\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0314 18:26:59.816618    4456 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.95.254:8443 with https://172.17.90.10:8443
	I0314 18:26:59.816765    4456 node_ready.go:35] waiting up to 6m0s for node "ha-832100-m03" to be "Ready" ...
	I0314 18:26:59.817298    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:26:59.817298    4456 round_trippers.go:469] Request Headers:
	I0314 18:26:59.817298    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:26:59.817298    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:26:59.832970    4456 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0314 18:27:00.321985    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:00.321985    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:00.321985    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:00.321985    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:00.326553    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:00.827051    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:00.827051    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:00.827051    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:00.827051    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:00.832348    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:27:01.331471    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:01.331551    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:01.331551    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:01.331551    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:01.337065    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:01.820519    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:01.820519    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:01.820519    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:01.820519    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:01.826133    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:27:01.827851    4456 node_ready.go:53] node "ha-832100-m03" has status "Ready":"False"
	I0314 18:27:02.325379    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:02.325602    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:02.325602    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:02.325602    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:02.330183    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:02.818901    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:02.818901    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:02.819129    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:02.819129    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:02.823781    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:03.327448    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:03.327523    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:03.327523    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:03.327523    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:03.331741    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:03.819758    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:03.819758    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:03.819758    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:03.819758    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:03.825521    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:27:04.327912    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:04.327912    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:04.327912    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:04.327912    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:04.332736    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:04.333348    4456 node_ready.go:53] node "ha-832100-m03" has status "Ready":"False"
	I0314 18:27:04.831778    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:04.847328    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:04.847391    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:04.847391    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:04.945479    4456 round_trippers.go:574] Response Status: 200 OK in 98 milliseconds
	I0314 18:27:05.320788    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:05.320788    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:05.320788    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:05.320788    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:05.324502    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:05.822920    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:05.822920    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:05.822920    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:05.822920    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:05.857661    4456 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0314 18:27:06.326230    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:06.326512    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:06.326512    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:06.326512    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:06.332824    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:27:06.832639    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:06.832679    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:06.832719    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:06.832719    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:06.837673    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:06.838326    4456 node_ready.go:53] node "ha-832100-m03" has status "Ready":"False"
	I0314 18:27:07.322310    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:07.322392    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:07.322392    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:07.322392    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:07.326670    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:07.817531    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:07.817531    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:07.817531    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:07.817531    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:07.842083    4456 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0314 18:27:08.331820    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:08.331870    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:08.331919    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:08.331919    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:08.336589    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:08.819433    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:08.819622    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:08.819622    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:08.819622    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:08.824219    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:09.325914    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:09.325914    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.325914    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.325914    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.330480    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:09.331713    4456 node_ready.go:49] node "ha-832100-m03" has status "Ready":"True"
	I0314 18:27:09.331805    4456 node_ready.go:38] duration metric: took 9.5143332s for node "ha-832100-m03" to be "Ready" ...
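The GET loop above is minikube polling /api/v1/nodes/ha-832100-m03 roughly every 500 ms until the node's Ready condition turns True (about 9.5 s here). A sketch of the same gate with plain kubectl, assuming the ha-832100 context is active; minikube itself talks to the API directly, as the round_trippers lines show:

    # Block until the kubelet reports Ready, with the same 6m ceiling as the log:
    kubectl wait --for=condition=Ready node/ha-832100-m03 --timeout=6m
    # Or read the exact condition the loop inspects:
    kubectl get node ha-832100-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'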
	I0314 18:27:09.331805    4456 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:27:09.331913    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:27:09.331913    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.332026    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.332026    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.340280    4456 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:27:09.350488    4456 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5rf5x" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.350488    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5rf5x
	I0314 18:27:09.350488    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.350488    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.350488    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.354738    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:09.356381    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:09.356498    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.356498    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.356498    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.359663    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:09.361116    4456 pod_ready.go:92] pod "coredns-5dd5756b68-5rf5x" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:09.361263    4456 pod_ready.go:81] duration metric: took 10.7736ms for pod "coredns-5dd5756b68-5rf5x" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.361263    4456 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mnw55" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.361351    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mnw55
	I0314 18:27:09.361390    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.361404    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.361404    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.365601    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:09.366466    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:09.366501    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.366501    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.366501    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.369964    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:09.370214    4456 pod_ready.go:92] pod "coredns-5dd5756b68-mnw55" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:09.370214    4456 pod_ready.go:81] duration metric: took 8.9505ms for pod "coredns-5dd5756b68-mnw55" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.370214    4456 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.370214    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100
	I0314 18:27:09.370214    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.370214    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.370214    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.374521    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:09.374521    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:09.374521    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.374521    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.374521    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.378452    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:09.379684    4456 pod_ready.go:92] pod "etcd-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:09.379684    4456 pod_ready.go:81] duration metric: took 9.4697ms for pod "etcd-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.379684    4456 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.379684    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m02
	I0314 18:27:09.379684    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.379684    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.379684    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.383337    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:09.384283    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:09.384357    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.384357    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.384357    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.387526    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:09.388101    4456 pod_ready.go:92] pod "etcd-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:09.388647    4456 pod_ready.go:81] duration metric: took 8.9619ms for pod "etcd-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.388647    4456 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.527084    4456 request.go:629] Waited for 138.1577ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m03
	I0314 18:27:09.527278    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m03
	I0314 18:27:09.527278    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.527278    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.527278    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.547113    4456 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0314 18:27:09.728461    4456 request.go:629] Waited for 180.7081ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:09.728658    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:09.728658    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.728658    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.728658    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.739027    4456 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 18:27:09.933901    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m03
	I0314 18:27:09.933973    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.933973    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.933973    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.939264    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:27:10.134402    4456 request.go:629] Waited for 194.1685ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:10.134759    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:10.134759    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:10.134759    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:10.134759    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:10.142549    4456 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:27:10.399734    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m03
	I0314 18:27:10.399937    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:10.399937    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:10.399937    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:10.406928    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:27:10.538754    4456 request.go:629] Waited for 131.1569ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:10.539105    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:10.539105    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:10.539105    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:10.539105    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:10.543821    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:10.897196    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m03
	I0314 18:27:10.897196    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:10.897196    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:10.897196    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:10.901810    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:10.928607    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:10.929017    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:10.929017    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:10.929017    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:10.933083    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:10.934406    4456 pod_ready.go:92] pod "etcd-ha-832100-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:10.934458    4456 pod_ready.go:81] duration metric: took 1.545634s for pod "etcd-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
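The "Waited for ... due to client-side throttling" lines come from client-go's token-bucket rate limiter, not from API-server priority and fairness (as the message itself notes). The client config earlier in this log shows QPS:0, Burst:0, so, assuming the library's usual defaults of 5 QPS with a burst of 10 apply, tokens refill every 200 ms once the burst is spent, which is why the logged waits cluster around 200 ms:

    # Assumption: client-go defaults (QPS=5, Burst=10) apply when rest.Config leaves both at 0.
    # Token interval at 5 requests/second:
    echo $((1000 / 5))   # -> 200 (ms; the spacing behind the "Waited for ..." gaps above)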
	I0314 18:27:10.934511    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:11.131571    4456 request.go:629] Waited for 196.965ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100
	I0314 18:27:11.131571    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100
	I0314 18:27:11.131571    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:11.131571    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:11.131571    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:11.142270    4456 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 18:27:11.332312    4456 request.go:629] Waited for 188.6003ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:11.332670    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:11.332670    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:11.332670    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:11.332670    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:11.339042    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:27:11.339693    4456 pod_ready.go:92] pod "kube-apiserver-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:11.339787    4456 pod_ready.go:81] duration metric: took 405.2467ms for pod "kube-apiserver-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:11.339787    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:11.536897    4456 request.go:629] Waited for 197.01ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m02
	I0314 18:27:11.537306    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m02
	I0314 18:27:11.537306    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:11.537306    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:11.537306    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:11.544162    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:27:11.740278    4456 request.go:629] Waited for 194.8178ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:11.740353    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:11.740353    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:11.740353    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:11.740353    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:11.745145    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:11.745756    4456 pod_ready.go:92] pod "kube-apiserver-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:11.745813    4456 pod_ready.go:81] duration metric: took 405.9958ms for pod "kube-apiserver-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:11.745813    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:11.926396    4456 request.go:629] Waited for 180.336ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m03
	I0314 18:27:11.926482    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m03
	I0314 18:27:11.926597    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:11.926597    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:11.926597    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:11.930829    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:12.130105    4456 request.go:629] Waited for 197.6038ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:12.130433    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:12.130521    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:12.130521    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:12.130574    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:12.135722    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:27:12.334766    4456 request.go:629] Waited for 79.4589ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m03
	I0314 18:27:12.334929    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m03
	I0314 18:27:12.334929    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:12.334929    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:12.334929    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:12.340002    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:12.536609    4456 request.go:629] Waited for 195.7417ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:12.536609    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:12.536609    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:12.536609    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:12.536609    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:12.540796    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:12.756441    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m03
	I0314 18:27:12.756441    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:12.756504    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:12.756504    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:12.760908    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:12.926934    4456 request.go:629] Waited for 164.2646ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:12.927023    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:12.927023    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:12.927023    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:12.927023    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:12.933829    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:27:13.255615    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m03
	I0314 18:27:13.255615    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:13.255615    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:13.255615    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:13.260197    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:13.326404    4456 request.go:629] Waited for 64.9859ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:13.326508    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:13.326508    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:13.326508    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:13.326508    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:13.331086    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:13.331682    4456 pod_ready.go:92] pod "kube-apiserver-ha-832100-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:13.331682    4456 pod_ready.go:81] duration metric: took 1.585696s for pod "kube-apiserver-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:13.331682    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:13.539727    4456 request.go:629] Waited for 207.2412ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100
	I0314 18:27:13.539807    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100
	I0314 18:27:13.539807    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:13.539889    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:13.539889    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:13.543998    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:13.730581    4456 request.go:629] Waited for 185.1362ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:13.730765    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:13.730899    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:13.730899    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:13.730899    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:13.739027    4456 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:27:13.739632    4456 pod_ready.go:92] pod "kube-controller-manager-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:13.739632    4456 pod_ready.go:81] duration metric: took 407.9198ms for pod "kube-controller-manager-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:13.739632    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:13.933100    4456 request.go:629] Waited for 193.1902ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100-m02
	I0314 18:27:13.933325    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100-m02
	I0314 18:27:13.933401    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:13.933401    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:13.933401    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:13.937453    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:14.135834    4456 request.go:629] Waited for 196.8582ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:14.135834    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:14.136111    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:14.136111    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:14.136181    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:14.140868    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:14.142495    4456 pod_ready.go:92] pod "kube-controller-manager-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:14.142564    4456 pod_ready.go:81] duration metric: took 402.9017ms for pod "kube-controller-manager-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:14.142564    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:14.337087    4456 request.go:629] Waited for 194.4072ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100-m03
	I0314 18:27:14.337359    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100-m03
	I0314 18:27:14.337359    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:14.337359    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:14.337359    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:14.344392    4456 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:27:14.541470    4456 request.go:629] Waited for 196.4135ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:14.541470    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:14.541470    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:14.541821    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:14.541821    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:14.546625    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:14.547885    4456 pod_ready.go:92] pod "kube-controller-manager-ha-832100-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:14.547885    4456 pod_ready.go:81] duration metric: took 405.2909ms for pod "kube-controller-manager-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:14.547885    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cnzzc" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:14.729590    4456 request.go:629] Waited for 181.4242ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cnzzc
	I0314 18:27:14.729688    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cnzzc
	I0314 18:27:14.729843    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:14.729866    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:14.729866    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:14.734497    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:14.932471    4456 request.go:629] Waited for 196.2143ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:14.932719    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:14.932719    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:14.932719    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:14.932719    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:14.938809    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:27:14.939565    4456 pod_ready.go:92] pod "kube-proxy-cnzzc" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:14.939565    4456 pod_ready.go:81] duration metric: took 391.6505ms for pod "kube-proxy-cnzzc" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:14.939565    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g4l9q" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:15.136238    4456 request.go:629] Waited for 196.4543ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4l9q
	I0314 18:27:15.136385    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4l9q
	I0314 18:27:15.136385    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:15.136385    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:15.136385    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:15.140977    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:15.339536    4456 request.go:629] Waited for 197.9938ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:15.339536    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:15.339536    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:15.339775    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:15.339775    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:15.343820    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:15.344305    4456 pod_ready.go:92] pod "kube-proxy-g4l9q" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:15.344305    4456 pod_ready.go:81] duration metric: took 404.7102ms for pod "kube-proxy-g4l9q" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:15.344305    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z9bkt" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:15.527009    4456 request.go:629] Waited for 182.1136ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9bkt
	I0314 18:27:15.527009    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9bkt
	I0314 18:27:15.527009    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:15.527009    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:15.527009    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:15.531720    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:15.730360    4456 request.go:629] Waited for 197.2543ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:15.730539    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:15.730539    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:15.730637    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:15.730637    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:15.734814    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:15.735761    4456 pod_ready.go:92] pod "kube-proxy-z9bkt" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:15.735761    4456 pod_ready.go:81] duration metric: took 391.4266ms for pod "kube-proxy-z9bkt" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:15.735761    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:15.931051    4456 request.go:629] Waited for 195.2762ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100
	I0314 18:27:15.931051    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100
	I0314 18:27:15.931051    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:15.931051    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:15.931051    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:15.935993    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:16.133118    4456 request.go:629] Waited for 196.3589ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:16.133118    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:16.133118    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:16.133118    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:16.133118    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:16.138113    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:16.139002    4456 pod_ready.go:92] pod "kube-scheduler-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:16.139531    4456 pod_ready.go:81] duration metric: took 403.7403ms for pod "kube-scheduler-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:16.139531    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:16.338457    4456 request.go:629] Waited for 198.8296ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100-m02
	I0314 18:27:16.338457    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100-m02
	I0314 18:27:16.338457    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:16.338457    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:16.338457    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:16.343118    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:16.540509    4456 request.go:629] Waited for 195.9608ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:16.540832    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:16.540931    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:16.540931    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:16.540931    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:16.546763    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:27:16.547435    4456 pod_ready.go:92] pod "kube-scheduler-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:16.547435    4456 pod_ready.go:81] duration metric: took 407.8734ms for pod "kube-scheduler-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:16.547435    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:16.726957    4456 request.go:629] Waited for 178.9807ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100-m03
	I0314 18:27:16.727180    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100-m03
	I0314 18:27:16.727180    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:16.727180    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:16.727243    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:16.732136    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:16.928539    4456 request.go:629] Waited for 195.5805ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:16.928777    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:16.928777    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:16.928777    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:16.928777    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:16.931827    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:16.933006    4456 pod_ready.go:92] pod "kube-scheduler-ha-832100-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:16.933006    4456 pod_ready.go:81] duration metric: took 385.5425ms for pod "kube-scheduler-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:16.933006    4456 pod_ready.go:38] duration metric: took 7.6006356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
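The pod_ready phase above walks each system-critical pod, issuing two GETs per pod (the pod, then its node) for every selector in the list. A one-shot sketch of the same readiness gate with kubectl, selectors copied from the pod_ready log line; this is an equivalent check, not what minikube actually runs:

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      # Wait for every kube-system pod matching the selector to report Ready.
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
    done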
	I0314 18:27:16.933006    4456 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:27:16.942786    4456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:27:16.965701    4456 api_server.go:72] duration metric: took 17.5769944s to wait for apiserver process to appear ...
	I0314 18:27:16.965756    4456 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:27:16.965756    4456 api_server.go:253] Checking apiserver healthz at https://172.17.90.10:8443/healthz ...
	I0314 18:27:16.975350    4456 api_server.go:279] https://172.17.90.10:8443/healthz returned 200:
	ok
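Having located the kube-apiserver process with pgrep, minikube probes /healthz and then /version. The same probes by hand (a sketch; the curl variant assumes the client certificate files named in the client config earlier in this log have been copied into the working directory):

    # Through kubectl, using the current context's credentials:
    kubectl get --raw /healthz    # prints "ok" on a healthy apiserver
    kubectl version
    # Or directly against the endpoint from the log:
    curl --cacert ca.crt --cert client.crt --key client.key https://172.17.90.10:8443/healthz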
	I0314 18:27:16.975628    4456 round_trippers.go:463] GET https://172.17.90.10:8443/version
	I0314 18:27:16.975628    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:16.975628    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:16.975628    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:16.976813    4456 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0314 18:27:16.977666    4456 api_server.go:141] control plane version: v1.28.4
	I0314 18:27:16.977666    4456 api_server.go:131] duration metric: took 11.9095ms to wait for apiserver health ...
	I0314 18:27:16.977666    4456 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:27:17.131169    4456 request.go:629] Waited for 153.3849ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:27:17.131513    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:27:17.131513    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:17.131513    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:17.131513    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:17.139896    4456 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:27:17.147802    4456 system_pods.go:59] 24 kube-system pods found
	I0314 18:27:17.147802    4456 system_pods.go:61] "coredns-5dd5756b68-5rf5x" [a1975ad0-d327-4b3a-81a0-ead7c000b839] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "coredns-5dd5756b68-mnw55" [1eb87fcd-6c11-4457-b9dc-aaa8ec89f851] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "etcd-ha-832100" [db669e0d-400b-4b97-a76f-53f15d844a6d] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "etcd-ha-832100-m02" [0127bd94-9828-4de0-9724-82b7de2a3730] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "etcd-ha-832100-m03" [848f4086-efb8-4323-ba6d-bef830e929aa] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kindnet-6n7bk" [a1281a26-baf8-4566-b964-e4b042aceae9] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kindnet-jvbts" [1070cc03-2571-4d58-9446-b704ad17b1b1] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kindnet-trr4z" [9576d1b9-b53d-4a68-8d93-59623314b444] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kube-apiserver-ha-832100" [30d411af-dab6-44d2-9887-a08a042d6150] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kube-apiserver-ha-832100-m02" [53db6070-884e-4df1-b77b-15a6415384db] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kube-apiserver-ha-832100-m03" [b6167751-0919-40b8-ad99-2fa53949189f] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kube-controller-manager-ha-832100" [6d430700-f7cd-473e-98a7-c5d4f6c0b984] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kube-controller-manager-ha-832100-m02" [81fa8e3e-357e-4a7a-8acc-4481c0292f26] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kube-controller-manager-ha-832100-m03" [fd950d1b-a488-4abf-903d-f1b6f6d875ea] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-proxy-cnzzc" [83a6c448-c577-4c77-8e21-11efe6bab9ac] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-proxy-g4l9q" [5e8dd3b4-2059-47f9-aca1-cadb8dc76b4d] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-proxy-z9bkt" [98f1ecf2-c332-4005-a248-3548fec2336b] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-scheduler-ha-832100" [28207820-b6cd-4573-82b1-9fa8b88741b1] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-scheduler-ha-832100-m02" [d0d35814-e1ca-4136-9e0a-5a578f4d08e2] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-scheduler-ha-832100-m03" [fde2e501-8a64-4863-b806-d42ed506c339] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-vip-ha-832100" [c20342af-ece8-442d-88e0-b15cd453b554] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-vip-ha-832100-m02" [f27cb2fa-b6eb-4c83-97c4-8582bb73aca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-vip-ha-832100-m03" [bde414f2-17e7-4b7e-b48d-e52340085739] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:27:17.148329    4456 system_pods.go:61] "storage-provisioner" [099c1e5d-1c0b-4df7-b023-1f8da354c4e6] Running
	I0314 18:27:17.148329    4456 system_pods.go:74] duration metric: took 170.65ms to wait for pod list to return data ...
	I0314 18:27:17.148329    4456 default_sa.go:34] waiting for default service account to be created ...
	I0314 18:27:17.333291    4456 request.go:629] Waited for 184.9487ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:27:17.333613    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:27:17.333613    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:17.333613    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:17.333613    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:17.338544    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:17.338845    4456 default_sa.go:45] found service account: "default"
	I0314 18:27:17.338845    4456 default_sa.go:55] duration metric: took 190.5018ms for default service account to be created ...
	I0314 18:27:17.338845    4456 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 18:27:17.538125    4456 request.go:629] Waited for 199.1196ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:27:17.538125    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:27:17.538125    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:17.538125    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:17.538125    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:17.551443    4456 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0314 18:27:17.560008    4456 system_pods.go:86] 24 kube-system pods found
	I0314 18:27:17.560008    4456 system_pods.go:89] "coredns-5dd5756b68-5rf5x" [a1975ad0-d327-4b3a-81a0-ead7c000b839] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "coredns-5dd5756b68-mnw55" [1eb87fcd-6c11-4457-b9dc-aaa8ec89f851] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "etcd-ha-832100" [db669e0d-400b-4b97-a76f-53f15d844a6d] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "etcd-ha-832100-m02" [0127bd94-9828-4de0-9724-82b7de2a3730] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "etcd-ha-832100-m03" [848f4086-efb8-4323-ba6d-bef830e929aa] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kindnet-6n7bk" [a1281a26-baf8-4566-b964-e4b042aceae9] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kindnet-jvbts" [1070cc03-2571-4d58-9446-b704ad17b1b1] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kindnet-trr4z" [9576d1b9-b53d-4a68-8d93-59623314b444] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-apiserver-ha-832100" [30d411af-dab6-44d2-9887-a08a042d6150] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-apiserver-ha-832100-m02" [53db6070-884e-4df1-b77b-15a6415384db] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-apiserver-ha-832100-m03" [b6167751-0919-40b8-ad99-2fa53949189f] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-controller-manager-ha-832100" [6d430700-f7cd-473e-98a7-c5d4f6c0b984] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-controller-manager-ha-832100-m02" [81fa8e3e-357e-4a7a-8acc-4481c0292f26] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-controller-manager-ha-832100-m03" [fd950d1b-a488-4abf-903d-f1b6f6d875ea] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-proxy-cnzzc" [83a6c448-c577-4c77-8e21-11efe6bab9ac] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-proxy-g4l9q" [5e8dd3b4-2059-47f9-aca1-cadb8dc76b4d] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-proxy-z9bkt" [98f1ecf2-c332-4005-a248-3548fec2336b] Running
	I0314 18:27:17.560539    4456 system_pods.go:89] "kube-scheduler-ha-832100" [28207820-b6cd-4573-82b1-9fa8b88741b1] Running
	I0314 18:27:17.560539    4456 system_pods.go:89] "kube-scheduler-ha-832100-m02" [d0d35814-e1ca-4136-9e0a-5a578f4d08e2] Running
	I0314 18:27:17.560539    4456 system_pods.go:89] "kube-scheduler-ha-832100-m03" [fde2e501-8a64-4863-b806-d42ed506c339] Running
	I0314 18:27:17.560539    4456 system_pods.go:89] "kube-vip-ha-832100" [c20342af-ece8-442d-88e0-b15cd453b554] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:27:17.560539    4456 system_pods.go:89] "kube-vip-ha-832100-m02" [f27cb2fa-b6eb-4c83-97c4-8582bb73aca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:27:17.560616    4456 system_pods.go:89] "kube-vip-ha-832100-m03" [bde414f2-17e7-4b7e-b48d-e52340085739] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:27:17.560616    4456 system_pods.go:89] "storage-provisioner" [099c1e5d-1c0b-4df7-b023-1f8da354c4e6] Running
	I0314 18:27:17.560616    4456 system_pods.go:126] duration metric: took 221.7554ms to wait for k8s-apps to be running ...
	I0314 18:27:17.560616    4456 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 18:27:17.569270    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:27:17.598733    4456 system_svc.go:56] duration metric: took 38.1135ms WaitForService to wait for kubelet
	I0314 18:27:17.598733    4456 kubeadm.go:576] duration metric: took 18.2099796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:27:17.598816    4456 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:27:17.738971    4456 request.go:629] Waited for 140.1448ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes
	I0314 18:27:17.739355    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes
	I0314 18:27:17.739355    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:17.739391    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:17.739391    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:17.743980    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:17.746411    4456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:27:17.746487    4456 node_conditions.go:123] node cpu capacity is 2
	I0314 18:27:17.746487    4456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:27:17.746487    4456 node_conditions.go:123] node cpu capacity is 2
	I0314 18:27:17.746487    4456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:27:17.746487    4456 node_conditions.go:123] node cpu capacity is 2
	I0314 18:27:17.746487    4456 node_conditions.go:105] duration metric: took 147.66ms to run NodePressure ...
	I0314 18:27:17.746487    4456 start.go:240] waiting for startup goroutines ...
	I0314 18:27:17.746553    4456 start.go:254] writing updated cluster config ...
	I0314 18:27:17.756080    4456 ssh_runner.go:195] Run: rm -f paused
	I0314 18:27:17.892103    4456 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 18:27:17.897265    4456 out.go:177] * Done! kubectl is now configured to use "ha-832100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 14 18:23:20 ha-832100 dockerd[1330]: time="2024-03-14T18:23:20.894367124Z" level=info msg="shim disconnected" id=033b57e92730d774dac5a521b534c9e3deae6095ed3960be49fe02aacda66a1e namespace=moby
	Mar 14 18:23:20 ha-832100 dockerd[1330]: time="2024-03-14T18:23:20.894863261Z" level=warning msg="cleaning up after shim disconnected" id=033b57e92730d774dac5a521b534c9e3deae6095ed3960be49fe02aacda66a1e namespace=moby
	Mar 14 18:23:20 ha-832100 dockerd[1330]: time="2024-03-14T18:23:20.895325196Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 14 18:23:22 ha-832100 dockerd[1330]: time="2024-03-14T18:23:22.008901522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:23:22 ha-832100 dockerd[1330]: time="2024-03-14T18:23:22.009523570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:23:22 ha-832100 dockerd[1330]: time="2024-03-14T18:23:22.009712584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:23:22 ha-832100 dockerd[1330]: time="2024-03-14T18:23:22.009983104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:25:07 ha-832100 dockerd[1330]: time="2024-03-14T18:25:07.192355859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:25:07 ha-832100 dockerd[1330]: time="2024-03-14T18:25:07.193150220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:25:07 ha-832100 dockerd[1330]: time="2024-03-14T18:25:07.193354535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:25:07 ha-832100 dockerd[1330]: time="2024-03-14T18:25:07.193645758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:25:14 ha-832100 dockerd[1324]: time="2024-03-14T18:25:14.276877661Z" level=info msg="ignoring event" container=06d1269b29766898d4d5377a33f0f19b3938c5b0214c65fb969e0e4b0673b8f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:25:14 ha-832100 dockerd[1330]: time="2024-03-14T18:25:14.278200462Z" level=info msg="shim disconnected" id=06d1269b29766898d4d5377a33f0f19b3938c5b0214c65fb969e0e4b0673b8f8 namespace=moby
	Mar 14 18:25:14 ha-832100 dockerd[1330]: time="2024-03-14T18:25:14.278835810Z" level=warning msg="cleaning up after shim disconnected" id=06d1269b29766898d4d5377a33f0f19b3938c5b0214c65fb969e0e4b0673b8f8 namespace=moby
	Mar 14 18:25:14 ha-832100 dockerd[1330]: time="2024-03-14T18:25:14.278926717Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 14 18:27:53 ha-832100 dockerd[1330]: time="2024-03-14T18:27:53.074463704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:27:53 ha-832100 dockerd[1330]: time="2024-03-14T18:27:53.075013746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:27:53 ha-832100 dockerd[1330]: time="2024-03-14T18:27:53.075088951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:27:53 ha-832100 dockerd[1330]: time="2024-03-14T18:27:53.076855687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:27:53 ha-832100 cri-dockerd[1216]: time="2024-03-14T18:27:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98ed522d977f6004135576a2dd58ddefba61f1a7ea388ebeca515f611b6e8425/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 14 18:27:54 ha-832100 cri-dockerd[1216]: time="2024-03-14T18:27:54Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 14 18:27:54 ha-832100 dockerd[1330]: time="2024-03-14T18:27:54.813562021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:27:54 ha-832100 dockerd[1330]: time="2024-03-14T18:27:54.813715132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:27:54 ha-832100 dockerd[1330]: time="2024-03-14T18:27:54.813766836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:27:54 ha-832100 dockerd[1330]: time="2024-03-14T18:27:54.813946249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4f9142c71e126       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   59 seconds ago      Running             busybox                   0                   98ed522d977f6       busybox-5b5d89c9d6-zncln
	06d1269b29766       22aaebb38f4a9                                                                                         3 minutes ago       Exited              kube-vip                  7                   75d9846fc06fe       kube-vip-ha-832100
	e8d9b70930630       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       1                   81016f8048464       storage-provisioner
	3e4608ed92136       ead0a4a53df89                                                                                         9 minutes ago       Running             coredns                   0                   cb8dedae57c55       coredns-5dd5756b68-mnw55
	8fe8402ba95f0       ead0a4a53df89                                                                                         9 minutes ago       Running             coredns                   0                   80577856f1776       coredns-5dd5756b68-5rf5x
	033b57e92730d       6e38f40d628db                                                                                         9 minutes ago       Exited              storage-provisioner       0                   81016f8048464       storage-provisioner
	9017dcb9908b5       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              9 minutes ago       Running             kindnet-cni               0                   bf4a1e0a49ad9       kindnet-jvbts
	fe9255d884de3       83f6cc407eed8                                                                                         9 minutes ago       Running             kube-proxy                0                   f6b21a276ec3a       kube-proxy-cnzzc
	ee93388e9e8be       7fe0e6f37db33                                                                                         9 minutes ago       Running             kube-apiserver            0                   49af7adad0829       kube-apiserver-ha-832100
	c62341ce43817       e3db313c6dbc0                                                                                         9 minutes ago       Running             kube-scheduler            0                   b3432d97eff2a       kube-scheduler-ha-832100
	5e44cfe6e22bc       d058aa5ab969c                                                                                         9 minutes ago       Running             kube-controller-manager   0                   cef907dc2fc23       kube-controller-manager-ha-832100
	3b28661f58ab8       73deb9a3f7025                                                                                         9 minutes ago       Running             etcd                      0                   b501cfaa98ae5       etcd-ha-832100
	
	
	==> coredns [3e4608ed9213] <==
	[INFO] 10.244.2.2:47753 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117009s
	[INFO] 10.244.2.2:45797 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000218816s
	[INFO] 10.244.2.2:54800 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023913859s
	[INFO] 10.244.2.2:33361 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193514s
	[INFO] 10.244.2.2:47990 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110708s
	[INFO] 10.244.1.2:43918 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150311s
	[INFO] 10.244.1.2:41533 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122209s
	[INFO] 10.244.0.4:37564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139711s
	[INFO] 10.244.0.4:40929 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000251018s
	[INFO] 10.244.0.4:56498 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179413s
	[INFO] 10.244.2.2:45730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175913s
	[INFO] 10.244.2.2:41389 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082607s
	[INFO] 10.244.2.2:56157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067305s
	[INFO] 10.244.1.2:52311 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178813s
	[INFO] 10.244.1.2:41198 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067105s
	[INFO] 10.244.1.2:58044 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051203s
	[INFO] 10.244.0.4:38792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207815s
	[INFO] 10.244.0.4:54825 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000200015s
	[INFO] 10.244.0.4:37063 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111609s
	[INFO] 10.244.2.2:47276 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124309s
	[INFO] 10.244.2.2:36530 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000126309s
	[INFO] 10.244.2.2:43071 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000263819s
	[INFO] 10.244.1.2:48345 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162612s
	[INFO] 10.244.1.2:38497 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109708s
	[INFO] 10.244.1.2:34357 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000129009s
	
	
	==> coredns [8fe8402ba95f] <==
	[INFO] 10.244.0.4:52653 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000181114s
	[INFO] 10.244.0.4:48615 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000071805s
	[INFO] 10.244.2.2:46048 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.057280613s
	[INFO] 10.244.2.2:33796 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014271s
	[INFO] 10.244.2.2:37552 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000244918s
	[INFO] 10.244.1.2:39094 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076806s
	[INFO] 10.244.1.2:54587 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071905s
	[INFO] 10.244.1.2:32916 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122009s
	[INFO] 10.244.1.2:51974 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.016769734s
	[INFO] 10.244.1.2:46253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111108s
	[INFO] 10.244.1.2:57138 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014721s
	[INFO] 10.244.0.4:60139 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000100907s
	[INFO] 10.244.0.4:56684 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194314s
	[INFO] 10.244.0.4:56094 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186713s
	[INFO] 10.244.0.4:46032 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000146711s
	[INFO] 10.244.0.4:60293 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098807s
	[INFO] 10.244.2.2:53095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066504s
	[INFO] 10.244.1.2:34493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097907s
	[INFO] 10.244.0.4:41544 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102308s
	[INFO] 10.244.2.2:40165 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000171113s
	[INFO] 10.244.1.2:45017 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000172613s
	[INFO] 10.244.0.4:44224 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083307s
	[INFO] 10.244.0.4:50565 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093507s
	[INFO] 10.244.0.4:50972 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095707s
	[INFO] 10.244.0.4:55958 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000108808s
	
	
	==> describe nodes <==
	Name:               ha-832100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-832100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-832100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T18_19_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:19:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-832100
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:28:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:28:27 +0000   Thu, 14 Mar 2024 18:19:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:28:27 +0000   Thu, 14 Mar 2024 18:19:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:28:27 +0000   Thu, 14 Mar 2024 18:19:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:28:27 +0000   Thu, 14 Mar 2024 18:19:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.90.10
	  Hostname:    ha-832100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 171daa7552864915b65ae5f72eac34f1
	  System UUID:                8618e286-8ee3-9d4d-a418-deff29a16f18
	  Boot ID:                    00d987ca-1c21-4890-8848-50fb6e3b581e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-zncln             0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         61s
	  kube-system                 coredns-5dd5756b68-5rf5x             100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     9m21s
	  kube-system                 coredns-5dd5756b68-mnw55             100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     9m21s
	  kube-system                 etcd-ha-832100                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         9m34s
	  kube-system                 kindnet-jvbts                        100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      9m21s
	  kube-system                 kube-apiserver-ha-832100             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m34s
	  kube-system                 kube-controller-manager-ha-832100    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m34s
	  kube-system                 kube-proxy-cnzzc                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m21s
	  kube-system                 kube-scheduler-ha-832100             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m37s
	  kube-system                 kube-vip-ha-832100                   0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m36s
	  kube-system                 storage-provisioner                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%!)(MISSING)   100m (5%!)(MISSING)
	  memory             290Mi (13%!)(MISSING)  390Mi (18%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m19s  kube-proxy       
	  Normal  Starting                 9m35s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m34s  kubelet          Node ha-832100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m34s  kubelet          Node ha-832100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m34s  kubelet          Node ha-832100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m22s  node-controller  Node ha-832100 event: Registered Node ha-832100 in Controller
	  Normal  NodeReady                9m13s  kubelet          Node ha-832100 status is now: NodeReady
	  Normal  RegisteredNode           5m14s  node-controller  Node ha-832100 event: Registered Node ha-832100 in Controller
	  Normal  RegisteredNode           100s   node-controller  Node ha-832100 event: Registered Node ha-832100 in Controller
	
	
	Name:               ha-832100-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-832100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-832100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_23_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:23:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-832100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:28:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:28:14 +0000   Thu, 14 Mar 2024 18:23:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:28:14 +0000   Thu, 14 Mar 2024 18:23:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:28:14 +0000   Thu, 14 Mar 2024 18:23:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:28:14 +0000   Thu, 14 Mar 2024 18:23:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.92.203
	  Hostname:    ha-832100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3fe8d7a570b4a5c950d13fd91eceebd
	  System UUID:                ace7f3bc-53a3-1848-9390-7794cc938af9
	  Boot ID:                    6a313c1e-e213-49a7-9d70-d2d411d4aa42
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-qjmj7                 0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         61s
	  kube-system                 etcd-ha-832100-m02                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         5m44s
	  kube-system                 kindnet-6n7bk                            100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      5m45s
	  kube-system                 kube-apiserver-ha-832100-m02             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m43s
	  kube-system                 kube-controller-manager-ha-832100-m02    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m43s
	  kube-system                 kube-proxy-g4l9q                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m45s
	  kube-system                 kube-scheduler-ha-832100-m02             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m43s
	  kube-system                 kube-vip-ha-832100-m02                   0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%!)(MISSING)  100m (5%!)(MISSING)
	  memory             150Mi (7%!)(MISSING)  50Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m27s  kube-proxy       
	  Normal  RegisteredNode  5m14s  node-controller  Node ha-832100-m02 event: Registered Node ha-832100-m02 in Controller
	  Normal  RegisteredNode  100s   node-controller  Node ha-832100-m02 event: Registered Node ha-832100-m02 in Controller
	
	
	Name:               ha-832100-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-832100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-832100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_26_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:26:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-832100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:28:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:28:27 +0000   Thu, 14 Mar 2024 18:26:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:28:27 +0000   Thu, 14 Mar 2024 18:26:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:28:27 +0000   Thu, 14 Mar 2024 18:26:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:28:27 +0000   Thu, 14 Mar 2024 18:27:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.89.54
	  Hostname:    ha-832100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 b90dfaadf10c415486a200ae38e5e9e0
	  System UUID:                70953b27-5407-2f48-b92b-4ef79ac9bbf1
	  Boot ID:                    74eb7504-536e-44be-ae28-79eb68262092
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-9wj82                 0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         61s
	  kube-system                 etcd-ha-832100-m03                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         116s
	  kube-system                 kindnet-trr4z                            100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      119s
	  kube-system                 kube-apiserver-ha-832100-m03             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         116s
	  kube-system                 kube-controller-manager-ha-832100-m03    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         116s
	  kube-system                 kube-proxy-z9bkt                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         119s
	  kube-system                 kube-scheduler-ha-832100-m03             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         116s
	  kube-system                 kube-vip-ha-832100-m03                   0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%!)(MISSING)  100m (5%!)(MISSING)
	  memory             150Mi (7%!)(MISSING)  50Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        113s  kube-proxy       
	  Normal  RegisteredNode  119s  node-controller  Node ha-832100-m03 event: Registered Node ha-832100-m03 in Controller
	  Normal  RegisteredNode  117s  node-controller  Node ha-832100-m03 event: Registered Node ha-832100-m03 in Controller
	  Normal  RegisteredNode  100s  node-controller  Node ha-832100-m03 event: Registered Node ha-832100-m03 in Controller
	
	
	==> dmesg <==
	[  +1.449499] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +5.730940] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar14 18:18] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.173200] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[ +29.193455] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.096956] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.504877] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	[  +0.188642] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.209471] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +2.775973] systemd-fstab-generator[1169]: Ignoring "noauto" option for root device
	[  +0.180980] systemd-fstab-generator[1181]: Ignoring "noauto" option for root device
	[  +0.196816] systemd-fstab-generator[1194]: Ignoring "noauto" option for root device
	[  +0.261377] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[ +12.846987] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.100731] kauditd_printk_skb: 205 callbacks suppressed
	[Mar14 18:19] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	[  +7.237686] systemd-fstab-generator[1791]: Ignoring "noauto" option for root device
	[  +0.093976] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.876829] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.028592] systemd-fstab-generator[2788]: Ignoring "noauto" option for root device
	[  +1.191853] kauditd_printk_skb: 24 callbacks suppressed
	[ +19.447796] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.701321] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [3b28661f58ab] <==
	{"level":"warn","ts":"2024-03-14T18:26:56.283553Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"f58643133c5f6bd3","error":"Get \"https://172.17.89.54:2380/version\": dial tcp 172.17.89.54:2380: connect: connection refused"}
	{"level":"info","ts":"2024-03-14T18:26:56.953674Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f58643133c5f6bd3"}
	{"level":"info","ts":"2024-03-14T18:26:56.953965Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fb1c6b41c6abc846","remote-peer-id":"f58643133c5f6bd3"}
	{"level":"info","ts":"2024-03-14T18:26:56.95625Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fb1c6b41c6abc846","remote-peer-id":"f58643133c5f6bd3"}
	{"level":"info","ts":"2024-03-14T18:26:56.993355Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fb1c6b41c6abc846","to":"f58643133c5f6bd3","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-14T18:26:56.993695Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fb1c6b41c6abc846","remote-peer-id":"f58643133c5f6bd3"}
	{"level":"info","ts":"2024-03-14T18:26:57.013593Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fb1c6b41c6abc846","to":"f58643133c5f6bd3","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-14T18:26:57.013678Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fb1c6b41c6abc846","remote-peer-id":"f58643133c5f6bd3"}
	{"level":"info","ts":"2024-03-14T18:26:57.507659Z","caller":"traceutil/trace.go:171","msg":"trace[1590466324] linearizableReadLoop","detail":"{readStateIndex:1225; appliedIndex:1225; }","duration":"162.671926ms","start":"2024-03-14T18:26:57.344966Z","end":"2024-03-14T18:26:57.507638Z","steps":["trace[1590466324] 'read index received'  (duration: 162.662525ms)","trace[1590466324] 'applied index is now lower than readState.Index'  (duration: 7.901µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T18:26:57.519984Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.010369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T18:26:57.520225Z","caller":"traceutil/trace.go:171","msg":"trace[2055186766] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1050; }","duration":"175.267089ms","start":"2024-03-14T18:26:57.344945Z","end":"2024-03-14T18:26:57.520212Z","steps":["trace[2055186766] 'agreement among raft nodes before linearized reading'  (duration: 163.062356ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:26:57.530663Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.430262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-14T18:26:57.530754Z","caller":"traceutil/trace.go:171","msg":"trace[1981487936] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1051; }","duration":"147.506268ms","start":"2024-03-14T18:26:57.383209Z","end":"2024-03-14T18:26:57.530715Z","steps":["trace[1981487936] 'agreement among raft nodes before linearized reading'  (duration: 147.40366ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T18:26:57.531034Z","caller":"traceutil/trace.go:171","msg":"trace[1559505287] transaction","detail":"{read_only:false; response_revision:1052; number_of_response:1; }","duration":"185.267053ms","start":"2024-03-14T18:26:57.345756Z","end":"2024-03-14T18:26:57.531023Z","steps":["trace[1559505287] 'process raft request'  (duration: 185.187146ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:26:57.53138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.467483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.17.90.10\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-03-14T18:26:57.531424Z","caller":"traceutil/trace.go:171","msg":"trace[823853727] range","detail":"{range_begin:/registry/masterleases/172.17.90.10; range_end:; response_count:1; response_revision:1052; }","duration":"142.516287ms","start":"2024-03-14T18:26:57.3889Z","end":"2024-03-14T18:26:57.531416Z","steps":["trace[823853727] 'agreement among raft nodes before linearized reading'  (duration: 142.160259ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:27:01.070877Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.635986ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T18:27:01.0711Z","caller":"traceutil/trace.go:171","msg":"trace[206796493] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:1068; }","duration":"206.878904ms","start":"2024-03-14T18:27:00.864206Z","end":"2024-03-14T18:27:01.071084Z","steps":["trace[206796493] 'count revisions from in-memory index tree'  (duration: 205.303985ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:27:05.206288Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"f58643133c5f6bd3","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"50.125711ms"}
	{"level":"warn","ts":"2024-03-14T18:27:05.206352Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c3b06ba6d32fdebc","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"50.195716ms"}
	{"level":"warn","ts":"2024-03-14T18:27:05.954439Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"f58643133c5f6bd3","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"66.238741ms"}
	{"level":"warn","ts":"2024-03-14T18:27:05.954583Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c3b06ba6d32fdebc","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"66.388453ms"}
	{"level":"warn","ts":"2024-03-14T18:27:06.120096Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.339733ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14431378453708500707 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1079 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1021 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T18:27:06.120621Z","caller":"traceutil/trace.go:171","msg":"trace[967587218] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"426.375877ms","start":"2024-03-14T18:27:05.694232Z","end":"2024-03-14T18:27:06.120608Z","steps":["trace[967587218] 'process raft request'  (duration: 260.464101ms)","trace[967587218] 'compare'  (duration: 165.133017ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T18:27:06.120795Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:27:05.694216Z","time spent":"426.538589ms","remote":"127.0.0.1:53976","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1094,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1079 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1021 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 18:28:53 up 11 min,  0 users,  load average: 0.62, 0.52, 0.32
	Linux ha-832100 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9017dcb9908b] <==
	I0314 18:28:04.776364       1 main.go:250] Node ha-832100-m03 has CIDR [10.244.2.0/24] 
	I0314 18:28:14.783552       1 main.go:223] Handling node with IPs: map[172.17.90.10:{}]
	I0314 18:28:14.783650       1 main.go:227] handling current node
	I0314 18:28:14.783664       1 main.go:223] Handling node with IPs: map[172.17.92.203:{}]
	I0314 18:28:14.783673       1 main.go:250] Node ha-832100-m02 has CIDR [10.244.1.0/24] 
	I0314 18:28:14.784282       1 main.go:223] Handling node with IPs: map[172.17.89.54:{}]
	I0314 18:28:14.784358       1 main.go:250] Node ha-832100-m03 has CIDR [10.244.2.0/24] 
	I0314 18:28:24.792833       1 main.go:223] Handling node with IPs: map[172.17.90.10:{}]
	I0314 18:28:24.792873       1 main.go:227] handling current node
	I0314 18:28:24.792884       1 main.go:223] Handling node with IPs: map[172.17.92.203:{}]
	I0314 18:28:24.792891       1 main.go:250] Node ha-832100-m02 has CIDR [10.244.1.0/24] 
	I0314 18:28:24.793121       1 main.go:223] Handling node with IPs: map[172.17.89.54:{}]
	I0314 18:28:24.793232       1 main.go:250] Node ha-832100-m03 has CIDR [10.244.2.0/24] 
	I0314 18:28:34.802492       1 main.go:223] Handling node with IPs: map[172.17.90.10:{}]
	I0314 18:28:34.802696       1 main.go:227] handling current node
	I0314 18:28:34.802713       1 main.go:223] Handling node with IPs: map[172.17.92.203:{}]
	I0314 18:28:34.802902       1 main.go:250] Node ha-832100-m02 has CIDR [10.244.1.0/24] 
	I0314 18:28:34.803241       1 main.go:223] Handling node with IPs: map[172.17.89.54:{}]
	I0314 18:28:34.803346       1 main.go:250] Node ha-832100-m03 has CIDR [10.244.2.0/24] 
	I0314 18:28:44.813197       1 main.go:223] Handling node with IPs: map[172.17.90.10:{}]
	I0314 18:28:44.813440       1 main.go:227] handling current node
	I0314 18:28:44.813593       1 main.go:223] Handling node with IPs: map[172.17.92.203:{}]
	I0314 18:28:44.813695       1 main.go:250] Node ha-832100-m02 has CIDR [10.244.1.0/24] 
	I0314 18:28:44.814458       1 main.go:223] Handling node with IPs: map[172.17.89.54:{}]
	I0314 18:28:44.814535       1 main.go:250] Node ha-832100-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [ee93388e9e8b] <==
	I0314 18:23:24.114858       1 trace.go:236] Trace[1988781831]: "List" accept:application/json, */*,audit-id:f89d6ad5-2a8a-481f-b49a-ed3b0f02d26a,client:172.17.90.10,protocol:HTTP/2.0,resource:nodes,scope:cluster,url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (14-Mar-2024 18:23:23.365) (total time: 749ms):
	Trace[1988781831]: ["List(recursive=true) etcd3" audit-id:f89d6ad5-2a8a-481f-b49a-ed3b0f02d26a,key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: 749ms (18:23:23.365)]
	Trace[1988781831]: [749.590637ms] [749.590637ms] END
	I0314 18:23:24.115349       1 trace.go:236] Trace[2077400690]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e98e546e-988a-49fb-a246-d216cd1a0567,client:172.17.92.203,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-g4l9q,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:GET (14-Mar-2024 18:23:23.361) (total time: 753ms):
	Trace[2077400690]: ---"About to write a response" 752ms (18:23:24.114)
	Trace[2077400690]: [753.698548ms] [753.698548ms] END
	I0314 18:23:24.180654       1 trace.go:236] Trace[38659120]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:5a161eb4-1853-46b8-8989-1f828d60e1c3,client:172.17.92.203,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (14-Mar-2024 18:23:17.903) (total time: 6277ms):
	Trace[38659120]: ["Create etcd3" audit-id:5a161eb4-1853-46b8-8989-1f828d60e1c3,key:/pods/kube-system/kube-controller-manager-ha-832100-m02,type:*core.Pod,resource:pods 6275ms (18:23:17.905)
	Trace[38659120]:  ---"Txn call succeeded" 6189ms (18:23:24.095)]
	Trace[38659120]: ---"Write to database call failed" len:2375,err:pods "kube-controller-manager-ha-832100-m02" already exists 84ms (18:23:24.179)
	Trace[38659120]: [6.277008184s] [6.277008184s] END
	I0314 18:23:24.188037       1 trace.go:236] Trace[539825607]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:12ee2ca4-f05d-469d-a18d-20f26d0fd1a9,client:172.17.92.203,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (14-Mar-2024 18:23:17.902) (total time: 6285ms):
	Trace[539825607]: ["Create etcd3" audit-id:12ee2ca4-f05d-469d-a18d-20f26d0fd1a9,key:/pods/kube-system/kube-scheduler-ha-832100-m02,type:*core.Pod,resource:pods 6282ms (18:23:17.905)
	Trace[539825607]:  ---"Txn call succeeded" 6198ms (18:23:24.104)]
	Trace[539825607]: ---"Write to database call failed" len:1220,err:pods "kube-scheduler-ha-832100-m02" already exists 83ms (18:23:24.187)
	Trace[539825607]: [6.285431023s] [6.285431023s] END
	I0314 18:23:24.191247       1 trace.go:236] Trace[910430440]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:39415533-b6cd-4239-ac2b-14eb8ea499ee,client:172.17.92.203,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (14-Mar-2024 18:23:17.892) (total time: 6298ms):
	Trace[910430440]: ["Create etcd3" audit-id:39415533-b6cd-4239-ac2b-14eb8ea499ee,key:/pods/kube-system/kube-apiserver-ha-832100-m02,type:*core.Pod,resource:pods 6297ms (18:23:17.894)
	Trace[910430440]:  ---"Txn call succeeded" 6201ms (18:23:24.095)]
	Trace[910430440]: ---"Write to database call failed" len:2991,err:pods "kube-apiserver-ha-832100-m02" already exists 95ms (18:23:24.191)
	Trace[910430440]: [6.298329301s] [6.298329301s] END
	I0314 18:26:45.173902       1 trace.go:236] Trace[418336116]: "Update" accept:application/json, */*,audit-id:5f51e147-a89c-455a-83c9-f2ce7ce5230d,client:172.17.90.10,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (14-Mar-2024 18:26:44.662) (total time: 511ms):
	Trace[418336116]: ["GuaranteedUpdate etcd3" audit-id:5f51e147-a89c-455a-83c9-f2ce7ce5230d,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 511ms (18:26:44.662)
	Trace[418336116]:  ---"Txn call completed" 510ms (18:26:45.173)]
	Trace[418336116]: [511.517864ms] [511.517864ms] END
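The traces above are the apiserver's slow-request log: a Trace is printed whenever a request exceeds the handler's latency threshold (roughly 500 ms by default). The three 6-second Create traces all end in 'pods "..." already exists', which looks like the m02 kubelet re-posting mirror pods for its static control-plane pods after a slow etcd Txn; the conflict itself is benign. One way to pull only the slow traces out of a capture like this (a sketch; the log file name is hypothetical):

    # Show trace headers whose total time reached one second or more
    grep -E 'Trace\[[0-9]+\].*total time: [0-9]{4,}ms' apiserver.log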
	
	
	==> kube-controller-manager [5e44cfe6e22b] <==
	I0314 18:26:55.028803       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-fgcrg"
	I0314 18:26:56.929652       1 event.go:307] "Event occurred" object="ha-832100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-832100-m03 event: Registered Node ha-832100-m03 in Controller"
	I0314 18:26:56.954131       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-832100-m03"
	I0314 18:27:52.075152       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 3"
	I0314 18:27:52.109675       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-qjmj7"
	I0314 18:27:52.155465       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-9wj82"
	I0314 18:27:52.156576       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-zncln"
	I0314 18:27:52.192387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="118.939796ms"
	I0314 18:27:52.281126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="88.672981ms"
	I0314 18:27:52.468197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="186.774684ms"
	I0314 18:27:52.591879       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-4pm2q"
	I0314 18:27:52.643246       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-4mfgf"
	I0314 18:27:52.643837       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-mkctr"
	I0314 18:27:52.644171       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-rn7hk"
	I0314 18:27:52.646869       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-xkh76"
	I0314 18:27:52.647934       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-w679p"
	I0314 18:27:52.706296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="237.883792ms"
	I0314 18:27:52.723776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="17.120009ms"
	I0314 18:27:52.724292       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="87.306µs"
	I0314 18:27:55.075368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="49.08881ms"
	I0314 18:27:55.075561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="64.005µs"
	I0314 18:27:55.335581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.955174ms"
	I0314 18:27:55.336837       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.804µs"
	I0314 18:27:55.496246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.456137ms"
	I0314 18:27:55.497261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="191.214µs"
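In the controller-manager window, the Deployment controller scales busybox to 3, the ReplicaSet busybox-5b5d89c9d6 creates three pods and in the same pass deletes six older ones (presumably left over from an earlier iteration of the test); each "Finished syncing" line is one reconcile of that ReplicaSet. To replay this from the cluster side (a sketch using the ha-832100 context from this run):

    # Events for the busybox ReplicaSet, oldest first
    kubectl --context ha-832100 get events -n default \
      --field-selector involvedObject.name=busybox-5b5d89c9d6 \
      --sort-by=.metadata.creationTimestamp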
	
	
	==> kube-proxy [fe9255d884de] <==
	I0314 18:19:33.797319       1 server_others.go:69] "Using iptables proxy"
	I0314 18:19:33.814945       1 node.go:141] Successfully retrieved node IP: 172.17.90.10
	I0314 18:19:33.905103       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:19:33.905127       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:19:33.911503       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:19:33.911626       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:19:33.912017       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:19:33.912031       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:19:33.914339       1 config.go:188] "Starting service config controller"
	I0314 18:19:33.914496       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:19:33.914636       1 config.go:315] "Starting node config controller"
	I0314 18:19:33.921402       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:19:33.914655       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:19:33.921554       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:19:34.021536       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:19:34.021928       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:19:34.023219       1 shared_informer.go:318] Caches are synced for endpoint slice config
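kube-proxy comes up finding no IPv6 iptables support, so it runs single-stack IPv4 in iptables mode and sets route_localnet=1 so NodePorts keep answering on localhost. Both facts can be checked from inside the node (a sketch; get a shell with "minikube -p ha-832100 ssh" and assume curl is present in the guest):

    # kube-proxy reports its active mode on its metrics port
    curl -s http://localhost:10249/proxyMode        # expect: iptables
    sysctl net.ipv4.conf.all.route_localnet         # expect: ... = 1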
	
	
	==> kube-scheduler [c62341ce4381] <==
	E0314 18:19:16.517603       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 18:19:16.522660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 18:19:16.522863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 18:19:16.605468       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 18:19:16.605775       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 18:19:16.643979       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 18:19:16.644275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 18:19:18.346336       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0314 18:26:54.750126       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-trr4z\": pod kindnet-trr4z is already assigned to node \"ha-832100-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-trr4z" node="ha-832100-m03"
	E0314 18:26:54.750588       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-trr4z\": pod kindnet-trr4z is already assigned to node \"ha-832100-m03\"" pod="kube-system/kindnet-trr4z"
	E0314 18:26:54.751535       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z9bkt\": pod kube-proxy-z9bkt is already assigned to node \"ha-832100-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z9bkt" node="ha-832100-m03"
	E0314 18:26:54.751778       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z9bkt\": pod kube-proxy-z9bkt is already assigned to node \"ha-832100-m03\"" pod="kube-system/kube-proxy-z9bkt"
	I0314 18:26:54.752484       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z9bkt" node="ha-832100-m03"
	E0314 18:27:52.154207       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-qjmj7\": pod busybox-5b5d89c9d6-qjmj7 is already assigned to node \"ha-832100-m02\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-qjmj7" node="ha-832100-m02"
	E0314 18:27:52.154497       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 0ad1b0ba-dbc3-4f27-8fa8-cc7b850d6caa(default/busybox-5b5d89c9d6-qjmj7) wasn't assumed so cannot be forgotten"
	E0314 18:27:52.154673       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-qjmj7\": pod busybox-5b5d89c9d6-qjmj7 is already assigned to node \"ha-832100-m02\"" pod="default/busybox-5b5d89c9d6-qjmj7"
	I0314 18:27:52.156566       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-qjmj7" node="ha-832100-m02"
	E0314 18:27:52.195181       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-9wj82\": pod busybox-5b5d89c9d6-9wj82 is already assigned to node \"ha-832100-m03\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-9wj82" node="ha-832100-m03"
	E0314 18:27:52.195422       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 808de52b-8630-4ff2-a243-87778fd03efb(default/busybox-5b5d89c9d6-9wj82) wasn't assumed so cannot be forgotten"
	E0314 18:27:52.196041       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-9wj82\": pod busybox-5b5d89c9d6-9wj82 is already assigned to node \"ha-832100-m03\"" pod="default/busybox-5b5d89c9d6-9wj82"
	I0314 18:27:52.196238       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-9wj82" node="ha-832100-m03"
	E0314 18:27:52.196990       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-zncln\": pod busybox-5b5d89c9d6-zncln is already assigned to node \"ha-832100\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-zncln" node="ha-832100"
	E0314 18:27:52.198139       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 999025ff-6bb9-4220-8616-b611779f27d1(default/busybox-5b5d89c9d6-zncln) wasn't assumed so cannot be forgotten"
	E0314 18:27:52.201114       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-zncln\": pod busybox-5b5d89c9d6-zncln is already assigned to node \"ha-832100\"" pod="default/busybox-5b5d89c9d6-zncln"
	I0314 18:27:52.202218       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-zncln" node="ha-832100"
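The scheduler errors above are optimistic-concurrency conflicts rather than scheduling failures: the bind call finds spec.nodeName already set (another control-plane instance apparently won the race), ForgetPod then fails because this scheduler never assumed the pod, and the retry is aborted once the pod is seen as placed. A quick sanity check that the contested pods actually landed (a sketch, same context as above):

    # Confirm the conflicted busybox pods are scheduled and running
    kubectl --context ha-832100 get pods -n default -o wide | grep busybox-5b5d89c9d6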
	
	
	==> kubelet <==
	Mar 14 18:27:19 ha-832100 kubelet[2809]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:27:19 ha-832100 kubelet[2809]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:27:19 ha-832100 kubelet[2809]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:27:19 ha-832100 kubelet[2809]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:27:28 ha-832100 kubelet[2809]: I0314 18:27:28.015369    2809 scope.go:117] "RemoveContainer" containerID="06d1269b29766898d4d5377a33f0f19b3938c5b0214c65fb969e0e4b0673b8f8"
	Mar 14 18:27:28 ha-832100 kubelet[2809]: E0314 18:27:28.015787    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:27:43 ha-832100 kubelet[2809]: I0314 18:27:43.015575    2809 scope.go:117] "RemoveContainer" containerID="06d1269b29766898d4d5377a33f0f19b3938c5b0214c65fb969e0e4b0673b8f8"
	Mar 14 18:27:43 ha-832100 kubelet[2809]: E0314 18:27:43.015916    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:27:52 ha-832100 kubelet[2809]: I0314 18:27:52.182477    2809 topology_manager.go:215] "Topology Admit Handler" podUID="999025ff-6bb9-4220-8616-b611779f27d1" podNamespace="default" podName="busybox-5b5d89c9d6-zncln"
	Mar 14 18:27:52 ha-832100 kubelet[2809]: I0314 18:27:52.316586    2809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lmhj\" (UniqueName: \"kubernetes.io/projected/999025ff-6bb9-4220-8616-b611779f27d1-kube-api-access-2lmhj\") pod \"busybox-5b5d89c9d6-zncln\" (UID: \"999025ff-6bb9-4220-8616-b611779f27d1\") " pod="default/busybox-5b5d89c9d6-zncln"
	Mar 14 18:27:55 ha-832100 kubelet[2809]: I0314 18:27:55.017146    2809 scope.go:117] "RemoveContainer" containerID="06d1269b29766898d4d5377a33f0f19b3938c5b0214c65fb969e0e4b0673b8f8"
	Mar 14 18:27:55 ha-832100 kubelet[2809]: E0314 18:27:55.017555    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:28:06 ha-832100 kubelet[2809]: I0314 18:28:06.015465    2809 scope.go:117] "RemoveContainer" containerID="06d1269b29766898d4d5377a33f0f19b3938c5b0214c65fb969e0e4b0673b8f8"
	Mar 14 18:28:06 ha-832100 kubelet[2809]: E0314 18:28:06.016054    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:28:19 ha-832100 kubelet[2809]: E0314 18:28:19.051134    2809 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:28:19 ha-832100 kubelet[2809]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:28:19 ha-832100 kubelet[2809]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:28:19 ha-832100 kubelet[2809]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:28:19 ha-832100 kubelet[2809]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:28:20 ha-832100 kubelet[2809]: I0314 18:28:20.015436    2809 scope.go:117] "RemoveContainer" containerID="06d1269b29766898d4d5377a33f0f19b3938c5b0214c65fb969e0e4b0673b8f8"
	Mar 14 18:28:20 ha-832100 kubelet[2809]: E0314 18:28:20.015853    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:28:35 ha-832100 kubelet[2809]: I0314 18:28:35.014861    2809 scope.go:117] "RemoveContainer" containerID="06d1269b29766898d4d5377a33f0f19b3938c5b0214c65fb969e0e4b0673b8f8"
	Mar 14 18:28:35 ha-832100 kubelet[2809]: E0314 18:28:35.015642    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:28:48 ha-832100 kubelet[2809]: I0314 18:28:48.015177    2809 scope.go:117] "RemoveContainer" containerID="06d1269b29766898d4d5377a33f0f19b3938c5b0214c65fb969e0e4b0673b8f8"
	Mar 14 18:28:48 ha-832100 kubelet[2809]: E0314 18:28:48.015534    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
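The kubelet section shows kube-vip stuck in a 5m CrashLoopBackOff: on every back-off tick it notes the dead container 06d1269b2976... and declines to restart it until the delay expires. Since this profile runs the docker runtime, the crashed container can be inspected directly on the node (a sketch; shell in with "minikube -p ha-832100 ssh"):

    # Last output and exit status of the crashed kube-vip container
    docker logs --tail 50 06d1269b2976
    docker inspect -f '{{.State.ExitCode}} {{.State.Error}}' 06d1269b2976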
	

-- /stdout --
** stderr ** 
	W0314 18:28:45.659281    5824 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-832100 -n ha-832100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-832100 -n ha-832100: (11.0995043s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-832100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/PingHostFromPods (65.07s)

TestMutliControlPlane/serial/RestartSecondaryNode (187.72s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-832100 node start m02 -v=7 --alsologtostderr: exit status 1 (1m48.8630828s)

-- stdout --
	* Starting "ha-832100-m02" control-plane node in "ha-832100" cluster
	* Restarting existing hyperv VM for "ha-832100-m02" ...

-- /stdout --
** stderr ** 
	W0314 18:44:30.800761    7764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0314 18:44:30.857003    7764 out.go:291] Setting OutFile to fd 1592 ...
	I0314 18:44:30.872037    7764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:44:30.872037    7764 out.go:304] Setting ErrFile to fd 280...
	I0314 18:44:30.872037    7764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:44:30.893465    7764 mustload.go:65] Loading cluster: ha-832100
	I0314 18:44:30.894135    7764 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:44:30.895466    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:44:32.833735    7764 main.go:141] libmachine: [stdout =====>] : Off
	
	I0314 18:44:32.833735    7764 main.go:141] libmachine: [stderr =====>] : 
	W0314 18:44:32.833735    7764 host.go:58] "ha-832100-m02" host status: Stopped
	I0314 18:44:32.837093    7764 out.go:177] * Starting "ha-832100-m02" control-plane node in "ha-832100" cluster
	I0314 18:44:32.839438    7764 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:44:32.839471    7764 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0314 18:44:32.839471    7764 cache.go:56] Caching tarball of preloaded images
	I0314 18:44:32.839999    7764 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 18:44:32.840130    7764 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 18:44:32.840130    7764 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:44:32.842631    7764 start.go:360] acquireMachinesLock for ha-832100-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:44:32.842845    7764 start.go:364] duration metric: took 142.9µs to acquireMachinesLock for "ha-832100-m02"
	I0314 18:44:32.843021    7764 start.go:96] Skipping create...Using existing machine configuration
	I0314 18:44:32.843021    7764 fix.go:54] fixHost starting: m02
	I0314 18:44:32.843021    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:44:34.806239    7764 main.go:141] libmachine: [stdout =====>] : Off
	
	I0314 18:44:34.806239    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:34.806431    7764 fix.go:112] recreateIfNeeded on ha-832100-m02: state=Stopped err=<nil>
	W0314 18:44:34.806431    7764 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 18:44:34.810033    7764 out.go:177] * Restarting existing hyperv VM for "ha-832100-m02" ...
	I0314 18:44:34.812609    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-832100-m02
	I0314 18:44:37.743096    7764 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:44:37.743096    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:37.743528    7764 main.go:141] libmachine: Waiting for host to start...
	I0314 18:44:37.743644    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:44:39.808023    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:44:39.808023    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:39.808926    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:44:42.117436    7764 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:44:42.117436    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:43.131763    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:44:45.137644    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:44:45.137723    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:45.137819    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:44:47.469732    7764 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:44:47.469732    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:48.479606    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:44:50.492987    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:44:50.492987    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:50.492987    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:44:52.777324    7764 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:44:52.777965    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:53.780380    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:44:55.787519    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:44:55.788266    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:55.788381    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:44:58.116265    7764 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:44:58.116265    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:59.130715    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:01.186848    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:01.187374    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:01.187374    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:03.584097    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:03.584174    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:03.586381    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:05.551621    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:05.551621    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:05.551709    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:07.942354    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:07.942566    7764 main.go:141] libmachine: [stderr =====>] : 
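After Start-VM, libmachine alternates VM-state and IP queries in a retry loop; the empty stdout lines mean DHCP had not yet handed out an address, and the IP 172.17.92.40 appears roughly 25 seconds in. The same wait can be reproduced from any shell that can reach powershell.exe (a sketch; VM name taken from this run):

    # Poll Hyper-V until the VM's first adapter reports an address
    while :; do
      ip=$(powershell.exe -NoProfile -NonInteractive \
        "(( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]" | tr -d '\r')
      [ -n "$ip" ] && { echo "$ip"; break; }
      sleep 1
    done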
	I0314 18:45:07.942663    7764 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:45:07.945163    7764 machine.go:94] provisionDockerMachine start ...
	I0314 18:45:07.945163    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:09.920946    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:09.920946    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:09.921711    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:12.335988    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:12.335988    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:12.339703    7764 main.go:141] libmachine: Using SSH client type: native
	I0314 18:45:12.340340    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
	I0314 18:45:12.340340    7764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:45:12.464462    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 18:45:12.464462    7764 buildroot.go:166] provisioning hostname "ha-832100-m02"
	I0314 18:45:12.464462    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:14.433683    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:14.433683    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:14.433683    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:16.819502    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:16.819502    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:16.824410    7764 main.go:141] libmachine: Using SSH client type: native
	I0314 18:45:16.824410    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
	I0314 18:45:16.824941    7764 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-832100-m02 && echo "ha-832100-m02" | sudo tee /etc/hostname
	I0314 18:45:16.975855    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-832100-m02
	
	I0314 18:45:16.975948    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:18.971359    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:18.971359    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:18.971359    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:21.353629    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:21.353702    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:21.356710    7764 main.go:141] libmachine: Using SSH client type: native
	I0314 18:45:21.357507    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
	I0314 18:45:21.357507    7764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:45:21.504399    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:45:21.504399    7764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 18:45:21.504399    7764 buildroot.go:174] setting up certificates
	I0314 18:45:21.504399    7764 provision.go:84] configureAuth start
	I0314 18:45:21.504399    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:23.482838    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:23.482838    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:23.482838    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:25.837762    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:25.838124    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:25.838124    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:27.845606    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:27.846151    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:27.846237    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:30.257618    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:30.257618    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:30.257618    7764 provision.go:143] copyHostCerts
	I0314 18:45:30.258190    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 18:45:30.258190    7764 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 18:45:30.258190    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 18:45:30.258717    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 18:45:30.259678    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 18:45:30.259760    7764 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 18:45:30.259760    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 18:45:30.259760    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 18:45:30.260459    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 18:45:30.261109    7764 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 18:45:30.261109    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 18:45:30.261329    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 18:45:30.261935    7764 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-832100-m02 san=[127.0.0.1 172.17.92.40 ha-832100-m02 localhost minikube]
	I0314 18:45:30.484120    7764 provision.go:177] copyRemoteCerts
	I0314 18:45:30.492427    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:45:30.492427    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:32.453911    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:32.453911    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:32.453911    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:34.834723    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:34.835199    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:34.836420    7764 sshutil.go:53] new ssh client: &{IP:172.17.92.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:45:34.937038    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4442337s)
	I0314 18:45:34.937038    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 18:45:34.937188    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:45:34.988604    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 18:45:34.989262    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 18:45:35.034273    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 18:45:35.034273    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 18:45:35.081906    7764 provision.go:87] duration metric: took 13.5764856s to configureAuth
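configureAuth regenerates the machine's server certificate with SANs covering the restarted VM ([127.0.0.1 172.17.92.40 ha-832100-m02 localhost minikube]) and copies ca.pem, server.pem and server-key.pem into /etc/docker, so the TLS identity the docker daemon presents matches the node's current address. To verify the installed cert on the node (a sketch; needs a reasonably recent openssl):

    # Check the SANs on the freshly provisioned docker server cert
    sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName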
	I0314 18:45:35.081906    7764 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:45:35.082386    7764 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:45:35.082386    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:37.066070    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:37.066773    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:37.066773    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:39.418088    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:39.418088    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:39.422380    7764 main.go:141] libmachine: Using SSH client type: native
	I0314 18:45:39.422380    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
	I0314 18:45:39.422910    7764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 18:45:39.560383    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 18:45:39.560383    7764 buildroot.go:70] root file system type: tmpfs
	I0314 18:45:39.560383    7764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 18:45:39.560383    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:41.528564    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:41.529225    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:41.529334    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:43.888912    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:43.888958    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:43.892740    7764 main.go:141] libmachine: Using SSH client type: native
	I0314 18:45:43.892740    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
	I0314 18:45:43.893279    7764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 18:45:44.052872    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 18:45:44.052984    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:46.006780    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:46.006780    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:46.006780    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:48.384763    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:48.385335    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:48.389165    7764 main.go:141] libmachine: Using SSH client type: native
	I0314 18:45:48.389602    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
	I0314 18:45:48.389602    7764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 18:45:50.724145    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 18:45:50.724145    7764 machine.go:97] duration metric: took 42.7757646s to provisionDockerMachine
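The docker unit install above is a change-detect-and-restart idiom: the generated file is written to docker.service.new and only moved into place, followed by daemon-reload / enable / restart, when diff says it differs; here diff failed because no docker.service existed yet, so the new unit was installed and enabled fresh (hence the "Created symlink" line). The pattern in isolation (a sketch with the same paths as the log):

    # Install a generated unit only when it actually changed
    new=/lib/systemd/system/docker.service.new
    cur=/lib/systemd/system/docker.service
    sudo diff -u "$cur" "$new" || {
      sudo mv "$new" "$cur"
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    }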
	I0314 18:45:50.724202    7764 start.go:293] postStartSetup for "ha-832100-m02" (driver="hyperv")
	I0314 18:45:50.724254    7764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:45:50.733254    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:45:50.733254    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:52.688773    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:52.689606    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:52.689656    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:55.063553    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:55.063634    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:55.064049    7764 sshutil.go:53] new ssh client: &{IP:172.17.92.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:45:55.172408    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4388192s)
	I0314 18:45:55.181406    7764 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:45:55.188043    7764 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:45:55.188043    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 18:45:55.188576    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 18:45:55.189280    7764 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 18:45:55.189280    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 18:45:55.198257    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:45:55.215755    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 18:45:55.258880    7764 start.go:296] duration metric: took 4.5343363s for postStartSetup
	I0314 18:45:55.269952    7764 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0314 18:45:55.269952    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:45:57.277970    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:45:57.277970    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:57.277970    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:45:59.622341    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:45:59.622341    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:45:59.623411    7764 sshutil.go:53] new ssh client: &{IP:172.17.92.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:45:59.739482    7764 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.4691919s)
	I0314 18:45:59.739561    7764 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0314 18:45:59.748625    7764 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0314 18:45:59.820285    7764 fix.go:56] duration metric: took 1m26.970735s for fixHost
	I0314 18:45:59.820285    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:46:01.776980    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:46:01.777317    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:46:01.777317    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:46:04.127176    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:46:04.127176    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:46:04.133841    7764 main.go:141] libmachine: Using SSH client type: native
	I0314 18:46:04.134481    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
	I0314 18:46:04.134481    7764 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 18:46:04.260930    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710441964.522297317
	
	I0314 18:46:04.260930    7764 fix.go:216] guest clock: 1710441964.522297317
	I0314 18:46:04.260930    7764 fix.go:229] Guest: 2024-03-14 18:46:04.522297317 +0000 UTC Remote: 2024-03-14 18:45:59.8202853 +0000 UTC m=+89.104202201 (delta=4.702012017s)
	I0314 18:46:04.261103    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:46:06.248414    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:46:06.248414    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:46:06.248515    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:46:08.616267    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:46:08.617242    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:46:08.621164    7764 main.go:141] libmachine: Using SSH client type: native
	I0314 18:46:08.621397    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
	I0314 18:46:08.621397    7764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710441964
	I0314 18:46:08.756294    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 18:46:04 UTC 2024
	
	I0314 18:46:08.756294    7764 fix.go:236] clock set: Thu Mar 14 18:46:04 UTC 2024
	 (err=<nil>)
	I0314 18:46:08.756294    7764 start.go:83] releasing machines lock for "ha-832100-m02", held for 1m35.9062453s
	I0314 18:46:08.756888    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:46:10.721985    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:46:10.722841    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:46:10.722841    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:46:13.088270    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:46:13.088270    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:46:13.092548    7764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:46:13.092730    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:46:13.099616    7764 ssh_runner.go:195] Run: systemctl --version
	I0314 18:46:13.100167    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:46:15.068551    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:46:15.068551    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:46:15.069461    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:46:15.085002    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:46:15.085002    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:46:15.085265    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:46:17.554167    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:46:17.554167    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:46:17.555028    7764 sshutil.go:53] new ssh client: &{IP:172.17.92.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:46:17.582876    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40
	
	I0314 18:46:17.583275    7764 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:46:17.583604    7764 sshutil.go:53] new ssh client: &{IP:172.17.92.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:46:17.644654    7764 ssh_runner.go:235] Completed: systemctl --version: (4.5446942s)
	I0314 18:46:17.653986    7764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:46:17.782209    7764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:46:17.782351    7764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6892322s)
	I0314 18:46:17.791101    7764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:46:17.819709    7764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:46:17.819709    7764 start.go:494] detecting cgroup driver to use...
	I0314 18:46:17.820111    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:46:17.868696    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 18:46:17.897458    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 18:46:17.917264    7764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 18:46:17.926700    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 18:46:17.955944    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:46:17.982868    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 18:46:18.013466    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:46:18.042643    7764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:46:18.072951    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 18:46:18.099924    7764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:46:18.124365    7764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:46:18.154165    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:46:18.346514    7764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 18:46:18.374534    7764 start.go:494] detecting cgroup driver to use...
	I0314 18:46:18.383655    7764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 18:46:18.413951    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:46:18.444150    7764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:46:18.477364    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:46:18.506799    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:46:18.538475    7764 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 18:46:18.595095    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:46:18.618644    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:46:18.660950    7764 ssh_runner.go:195] Run: which cri-dockerd
	I0314 18:46:18.677370    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 18:46:18.694244    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 18:46:18.731470    7764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 18:46:18.910106    7764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 18:46:19.088586    7764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 18:46:19.088823    7764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 18:46:19.127621    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:46:19.305810    7764 ssh_runner.go:195] Run: sudo systemctl restart docker

** /stderr **
ha_test.go:422: W0314 18:44:30.800761    7764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0314 18:44:30.857003    7764 out.go:291] Setting OutFile to fd 1592 ...
I0314 18:44:30.872037    7764 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:44:30.872037    7764 out.go:304] Setting ErrFile to fd 280...
I0314 18:44:30.872037    7764 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:44:30.893465    7764 mustload.go:65] Loading cluster: ha-832100
I0314 18:44:30.894135    7764 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:44:30.895466    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:44:32.833735    7764 main.go:141] libmachine: [stdout =====>] : Off

I0314 18:44:32.833735    7764 main.go:141] libmachine: [stderr =====>] : 
W0314 18:44:32.833735    7764 host.go:58] "ha-832100-m02" host status: Stopped
I0314 18:44:32.837093    7764 out.go:177] * Starting "ha-832100-m02" control-plane node in "ha-832100" cluster
I0314 18:44:32.839438    7764 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0314 18:44:32.839471    7764 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
I0314 18:44:32.839471    7764 cache.go:56] Caching tarball of preloaded images
I0314 18:44:32.839999    7764 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0314 18:44:32.840130    7764 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0314 18:44:32.840130    7764 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
I0314 18:44:32.842631    7764 start.go:360] acquireMachinesLock for ha-832100-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0314 18:44:32.842845    7764 start.go:364] duration metric: took 142.9µs to acquireMachinesLock for "ha-832100-m02"
I0314 18:44:32.843021    7764 start.go:96] Skipping create...Using existing machine configuration
I0314 18:44:32.843021    7764 fix.go:54] fixHost starting: m02
I0314 18:44:32.843021    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:44:34.806239    7764 main.go:141] libmachine: [stdout =====>] : Off

I0314 18:44:34.806239    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:44:34.806431    7764 fix.go:112] recreateIfNeeded on ha-832100-m02: state=Stopped err=<nil>
W0314 18:44:34.806431    7764 fix.go:138] unexpected machine state, will restart: <nil>
I0314 18:44:34.810033    7764 out.go:177] * Restarting existing hyperv VM for "ha-832100-m02" ...
I0314 18:44:34.812609    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-832100-m02
I0314 18:44:37.743096    7764 main.go:141] libmachine: [stdout =====>] : 
I0314 18:44:37.743096    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:44:37.743528    7764 main.go:141] libmachine: Waiting for host to start...
I0314 18:44:37.743644    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:44:39.808023    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:44:39.808023    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:44:39.808926    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:44:42.117436    7764 main.go:141] libmachine: [stdout =====>] : 
I0314 18:44:42.117436    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:44:43.131763    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:44:45.137644    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:44:45.137723    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:44:45.137819    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:44:47.469732    7764 main.go:141] libmachine: [stdout =====>] : 
I0314 18:44:47.469732    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:44:48.479606    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:44:50.492987    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:44:50.492987    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:44:50.492987    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:44:52.777324    7764 main.go:141] libmachine: [stdout =====>] : 
I0314 18:44:52.777965    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:44:53.780380    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:44:55.787519    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:44:55.788266    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:44:55.788381    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:44:58.116265    7764 main.go:141] libmachine: [stdout =====>] : 
I0314 18:44:58.116265    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:44:59.130715    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:01.186848    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:01.187374    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:01.187374    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:03.584097    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:03.584174    7764 main.go:141] libmachine: [stderr =====>] : 
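The repeated Get-VM calls above are minikube's wait loop: after Hyper-V\Start-VM returns, the driver polls the VM state and the first network adapter until an IPv4 address appears. A minimal Go sketch of that pattern, with the VM name taken from the log; the five-second interval and the PowerShell invocation are illustrative, not the driver's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// vmIP asks Hyper-V, via PowerShell, for the first IPv4 address on the
// VM's first network adapter -- the same expression seen in the log.
func vmIP(name string) (string, error) {
	expr := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for {
		ip, err := vmIP("ha-832100-m02")
		if err == nil && ip != "" {
			fmt.Println("VM is up at", ip)
			return
		}
		time.Sleep(5 * time.Second) // the log shows roughly 5s between polls
	}
}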
I0314 18:45:03.586381    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:05.551621    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:05.551621    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:05.551709    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:07.942354    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:07.942566    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:07.942663    7764 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
I0314 18:45:07.945163    7764 machine.go:94] provisionDockerMachine start ...
I0314 18:45:07.945163    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:09.920946    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:09.920946    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:09.921711    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:12.335988    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:12.335988    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:12.339703    7764 main.go:141] libmachine: Using SSH client type: native
I0314 18:45:12.340340    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
I0314 18:45:12.340340    7764 main.go:141] libmachine: About to run SSH command:
hostname
I0314 18:45:12.464462    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0314 18:45:12.464462    7764 buildroot.go:166] provisioning hostname "ha-832100-m02"
I0314 18:45:12.464462    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:14.433683    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:14.433683    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:14.433683    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:16.819502    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:16.819502    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:16.824410    7764 main.go:141] libmachine: Using SSH client type: native
I0314 18:45:16.824410    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
I0314 18:45:16.824941    7764 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-832100-m02 && echo "ha-832100-m02" | sudo tee /etc/hostname
I0314 18:45:16.975855    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-832100-m02

I0314 18:45:16.975948    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:18.971359    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:18.971359    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:18.971359    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:21.353629    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:21.353702    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:21.356710    7764 main.go:141] libmachine: Using SSH client type: native
I0314 18:45:21.357507    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
I0314 18:45:21.357507    7764 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-832100-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832100-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-832100-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0314 18:45:21.504399    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
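The SSH command above is a templated shell snippet that pins the node's hostname to 127.0.1.1 in /etc/hosts: rewrite an existing 127.0.1.1 line if one is present, otherwise append one, and do nothing if the hostname is already mapped. A sketch of how such a command can be rendered in Go; the helper name is illustrative, not minikube's exact provisioner code:

package main

import "fmt"

// setHostsCmd renders the /etc/hosts snippet from the log for an
// arbitrary node name.
func setHostsCmd(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
}

func main() {
	fmt.Println(setHostsCmd("ha-832100-m02"))
}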
I0314 18:45:21.504399    7764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
I0314 18:45:21.504399    7764 buildroot.go:174] setting up certificates
I0314 18:45:21.504399    7764 provision.go:84] configureAuth start
I0314 18:45:21.504399    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:23.482838    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:23.482838    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:23.482838    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:25.837762    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:25.838124    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:25.838124    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:27.845606    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:27.846151    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:27.846237    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:30.257618    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:30.257618    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:30.257618    7764 provision.go:143] copyHostCerts
I0314 18:45:30.258190    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
I0314 18:45:30.258190    7764 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
I0314 18:45:30.258190    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
I0314 18:45:30.258717    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
I0314 18:45:30.259678    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
I0314 18:45:30.259760    7764 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
I0314 18:45:30.259760    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
I0314 18:45:30.259760    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
I0314 18:45:30.260459    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
I0314 18:45:30.261109    7764 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
I0314 18:45:30.261109    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
I0314 18:45:30.261329    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
I0314 18:45:30.261935    7764 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-832100-m02 san=[127.0.0.1 172.17.92.40 ha-832100-m02 localhost minikube]
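The provision step above generates a server certificate whose SANs cover every name the Docker TLS endpoint can be reached by: loopback, the VM's IP, the node hostname, localhost, and minikube. A self-contained sketch with Go's standard library, using the SANs from the log line; unlike the provisioner, which signs with ca-key.pem, this version self-signs for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-832100-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.92.40")},
		DNSNames:    []string{"ha-832100-m02", "localhost", "minikube"},
	}
	// Self-signed for brevity; the provisioner signs with the CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}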
I0314 18:45:30.484120    7764 provision.go:177] copyRemoteCerts
I0314 18:45:30.492427    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0314 18:45:30.492427    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:32.453911    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:32.453911    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:32.453911    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:34.834723    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:34.835199    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:34.836420    7764 sshutil.go:53] new ssh client: &{IP:172.17.92.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
I0314 18:45:34.937038    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4442337s)
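Each `ssh_runner.go:195] Run:` / `Completed:` pair in this log is one command executed over SSH as the `docker` user with the machine's private key, as the sshutil.go line above shows. A minimal sketch of that pattern using golang.org/x/crypto/ssh, with the IP, user, and key path copied from the log; skipping host-key verification is only acceptable here because the target is a throwaway test VM:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "172.17.92.40:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, _ := session.CombinedOutput("sudo mkdir -p /etc/docker")
	fmt.Print(string(out))
}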
I0314 18:45:34.937038    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0314 18:45:34.937188    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0314 18:45:34.988604    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0314 18:45:34.989262    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
I0314 18:45:35.034273    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0314 18:45:35.034273    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0314 18:45:35.081906    7764 provision.go:87] duration metric: took 13.5764856s to configureAuth
I0314 18:45:35.081906    7764 buildroot.go:189] setting minikube options for container-runtime
I0314 18:45:35.082386    7764 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:45:35.082386    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:37.066070    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:37.066773    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:37.066773    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:39.418088    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:39.418088    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:39.422380    7764 main.go:141] libmachine: Using SSH client type: native
I0314 18:45:39.422380    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
I0314 18:45:39.422910    7764 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0314 18:45:39.560383    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0314 18:45:39.560383    7764 buildroot.go:70] root file system type: tmpfs
I0314 18:45:39.560383    7764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0314 18:45:39.560383    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:41.528564    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:41.529225    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:41.529334    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:43.888912    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:43.888958    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:43.892740    7764 main.go:141] libmachine: Using SSH client type: native
I0314 18:45:43.892740    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
I0314 18:45:43.893279    7764 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0314 18:45:44.052872    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0314 18:45:44.052984    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:46.006780    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:46.006780    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:46.006780    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:48.384763    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:48.385335    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:48.389165    7764 main.go:141] libmachine: Using SSH client type: native
I0314 18:45:48.389602    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
I0314 18:45:48.389602    7764 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0314 18:45:50.724145    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I0314 18:45:50.724145    7764 machine.go:97] duration metric: took 42.7757646s to provisionDockerMachine
I0314 18:45:50.724202    7764 start.go:293] postStartSetup for "ha-832100-m02" (driver="hyperv")
I0314 18:45:50.724254    7764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0314 18:45:50.733254    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0314 18:45:50.733254    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:52.688773    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:52.689606    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:52.689656    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:55.063553    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:55.063634    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:55.064049    7764 sshutil.go:53] new ssh client: &{IP:172.17.92.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
I0314 18:45:55.172408    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4388192s)
I0314 18:45:55.181406    7764 ssh_runner.go:195] Run: cat /etc/os-release
I0314 18:45:55.188043    7764 info.go:137] Remote host: Buildroot 2023.02.9
I0314 18:45:55.188043    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
I0314 18:45:55.188576    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
I0314 18:45:55.189280    7764 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
I0314 18:45:55.189280    7764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
I0314 18:45:55.198257    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0314 18:45:55.215755    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
I0314 18:45:55.258880    7764 start.go:296] duration metric: took 4.5343363s for postStartSetup
I0314 18:45:55.269952    7764 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0314 18:45:55.269952    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:45:57.277970    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:45:57.277970    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:57.277970    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:45:59.622341    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:45:59.622341    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:45:59.623411    7764 sshutil.go:53] new ssh client: &{IP:172.17.92.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
I0314 18:45:59.739482    7764 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.4691919s)
I0314 18:45:59.739561    7764 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0314 18:45:59.748625    7764 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0314 18:45:59.820285    7764 fix.go:56] duration metric: took 1m26.970735s for fixHost
I0314 18:45:59.820285    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:46:01.776980    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:46:01.777317    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:46:01.777317    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:46:04.127176    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:46:04.127176    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:46:04.133841    7764 main.go:141] libmachine: Using SSH client type: native
I0314 18:46:04.134481    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
I0314 18:46:04.134481    7764 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0314 18:46:04.260930    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710441964.522297317

I0314 18:46:04.260930    7764 fix.go:216] guest clock: 1710441964.522297317
I0314 18:46:04.260930    7764 fix.go:229] Guest: 2024-03-14 18:46:04.522297317 +0000 UTC Remote: 2024-03-14 18:45:59.8202853 +0000 UTC m=+89.104202201 (delta=4.702012017s)
I0314 18:46:04.261103    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:46:06.248414    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:46:06.248414    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:46:06.248515    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:46:08.616267    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:46:08.617242    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:46:08.621164    7764 main.go:141] libmachine: Using SSH client type: native
I0314 18:46:08.621397    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.40 22 <nil> <nil>}
I0314 18:46:08.621397    7764 main.go:141] libmachine: About to run SSH command:
sudo date -s @1710441964
I0314 18:46:08.756294    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 18:46:04 UTC 2024

I0314 18:46:08.756294    7764 fix.go:236] clock set: Thu Mar 14 18:46:04 UTC 2024
(err=<nil>)
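The clock check above (fix.go) compares the guest's `date +%s.%N` output against the host-side reference time and, because the ~4.7s delta exceeds tolerance, resets the guest clock with `date -s`. The delta arithmetic, reproduced in Go from the logged values:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest: parsed from the "date +%s.%N" output above.
	guest := time.Unix(1710441964, 522297317)
	// Remote: the host-side reference timestamp from the fix.go:229 line.
	remote := time.Date(2024, time.March, 14, 18, 45, 59, 820285300, time.UTC)
	fmt.Println("delta:", guest.Sub(remote)) // 4.702012017s, as logged
}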
I0314 18:46:08.756294    7764 start.go:83] releasing machines lock for "ha-832100-m02", held for 1m35.9062453s
I0314 18:46:08.756888    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:46:10.721985    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:46:10.722841    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:46:10.722841    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:46:13.088270    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:46:13.088270    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:46:13.092548    7764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0314 18:46:13.092730    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:46:13.099616    7764 ssh_runner.go:195] Run: systemctl --version
I0314 18:46:13.100167    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
I0314 18:46:15.068551    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:46:15.068551    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:46:15.069461    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:46:15.085002    7764 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:46:15.085002    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:46:15.085265    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
I0314 18:46:17.554167    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:46:17.554167    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:46:17.555028    7764 sshutil.go:53] new ssh client: &{IP:172.17.92.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
I0314 18:46:17.582876    7764 main.go:141] libmachine: [stdout =====>] : 172.17.92.40

I0314 18:46:17.583275    7764 main.go:141] libmachine: [stderr =====>] : 
I0314 18:46:17.583604    7764 sshutil.go:53] new ssh client: &{IP:172.17.92.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
I0314 18:46:17.644654    7764 ssh_runner.go:235] Completed: systemctl --version: (4.5446942s)
I0314 18:46:17.653986    7764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0314 18:46:17.782209    7764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0314 18:46:17.782351    7764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6892322s)
I0314 18:46:17.791101    7764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0314 18:46:17.819709    7764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0314 18:46:17.819709    7764 start.go:494] detecting cgroup driver to use...
I0314 18:46:17.820111    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0314 18:46:17.868696    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0314 18:46:17.897458    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0314 18:46:17.917264    7764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0314 18:46:17.926700    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0314 18:46:17.955944    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0314 18:46:17.982868    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0314 18:46:18.013466    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0314 18:46:18.042643    7764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0314 18:46:18.072951    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
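The sed commands above rewrite /etc/containerd/config.toml in place: pin the sandbox image, force `SystemdCgroup = false` (the "cgroupfs" driver noted at containerd.go:146), migrate runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A sketch of the SystemdCgroup edit expressed as a Go regexp equivalent to the sed expression, run against a sample config fragment:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Same edit as the sed expression above: keep the indentation, flip the value.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}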
I0314 18:46:18.099924    7764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0314 18:46:18.124365    7764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0314 18:46:18.154165    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0314 18:46:18.346514    7764 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0314 18:46:18.374534    7764 start.go:494] detecting cgroup driver to use...
I0314 18:46:18.383655    7764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0314 18:46:18.413951    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0314 18:46:18.444150    7764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0314 18:46:18.477364    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0314 18:46:18.506799    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0314 18:46:18.538475    7764 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0314 18:46:18.595095    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0314 18:46:18.618644    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0314 18:46:18.660950    7764 ssh_runner.go:195] Run: which cri-dockerd
I0314 18:46:18.677370    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0314 18:46:18.694244    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0314 18:46:18.731470    7764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0314 18:46:18.910106    7764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0314 18:46:19.088586    7764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0314 18:46:19.088823    7764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0314 18:46:19.127621    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0314 18:46:19.305810    7764 ssh_runner.go:195] Run: sudo systemctl restart docker
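The `scp memory --> /etc/docker/daemon.json (130 bytes)` step writes Docker's cgroup-driver override before the restart. The log records only the payload's size, so the contents below are an assumption: a typical daemon.json that pins dockerd to cgroupfs, built here with encoding/json:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical reconstruction of a cgroupfs daemon.json; the log only
	// shows its size (130 bytes), not its contents.
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
```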
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-832100 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr: context deadline exceeded (61.3µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr: context deadline exceeded (91.7µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr: context deadline exceeded (324.3µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr: context deadline exceeded (56.4µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:432: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr" : context deadline exceeded
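The near-instant failures above, `context deadline exceeded` after 0s to a few hundred microseconds, are characteristic of retrying with an already-expired context: once the deadline passes, every call that consults ctx.Err() fails immediately without attempting any real work. A small self-contained Go demonstration:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

func doWork(ctx context.Context) error {
	select {
	case <-ctx.Done():
		return ctx.Err() // context.DeadlineExceeded
	case <-time.After(time.Second):
		return nil
	}
}

func main() {
	// A context whose deadline has already passed.
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	time.Sleep(100 * time.Millisecond) // let the deadline expire

	for i := 0; i < 3; i++ {
		start := time.Now()
		err := doWork(ctx)
		// Each retry fails in microseconds: the context is already dead,
		// so no real work (SSH, PowerShell, etc.) is attempted.
		fmt.Printf("attempt %d: %v (%v)\n", i+1, err, time.Since(start))
	}
}
```

Each retry here prints in microseconds, matching the (0s)/(61.3µs) pattern in the test output above.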
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-832100 -n ha-832100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-832100 -n ha-832100: (11.2621002s)
helpers_test.go:244: <<< TestMutliControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 logs -n 25: (8.0158086s)
helpers_test.go:252: TestMutliControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-832100 ssh -n                                                                                                          | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:39 UTC | 14 Mar 24 18:39 UTC |
	|         | ha-832100-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-832100 cp ha-832100-m03:/home/docker/cp-test.txt                                                                       | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:39 UTC | 14 Mar 24 18:39 UTC |
	|         | ha-832100:/home/docker/cp-test_ha-832100-m03_ha-832100.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n                                                                                                          | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:39 UTC | 14 Mar 24 18:39 UTC |
	|         | ha-832100-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n ha-832100 sudo cat                                                                                       | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:39 UTC | 14 Mar 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-832100-m03_ha-832100.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-832100 cp ha-832100-m03:/home/docker/cp-test.txt                                                                       | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:39 UTC | 14 Mar 24 18:40 UTC |
	|         | ha-832100-m02:/home/docker/cp-test_ha-832100-m03_ha-832100-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n                                                                                                          | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:40 UTC | 14 Mar 24 18:40 UTC |
	|         | ha-832100-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n ha-832100-m02 sudo cat                                                                                   | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:40 UTC | 14 Mar 24 18:40 UTC |
	|         | /home/docker/cp-test_ha-832100-m03_ha-832100-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-832100 cp ha-832100-m03:/home/docker/cp-test.txt                                                                       | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:40 UTC | 14 Mar 24 18:40 UTC |
	|         | ha-832100-m04:/home/docker/cp-test_ha-832100-m03_ha-832100-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n                                                                                                          | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:40 UTC | 14 Mar 24 18:40 UTC |
	|         | ha-832100-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n ha-832100-m04 sudo cat                                                                                   | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:40 UTC | 14 Mar 24 18:40 UTC |
	|         | /home/docker/cp-test_ha-832100-m03_ha-832100-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-832100 cp testdata\cp-test.txt                                                                                         | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:40 UTC | 14 Mar 24 18:41 UTC |
	|         | ha-832100-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n                                                                                                          | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:41 UTC | 14 Mar 24 18:41 UTC |
	|         | ha-832100-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-832100 cp ha-832100-m04:/home/docker/cp-test.txt                                                                       | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:41 UTC | 14 Mar 24 18:41 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile2068586315\001\cp-test_ha-832100-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n                                                                                                          | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:41 UTC | 14 Mar 24 18:41 UTC |
	|         | ha-832100-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-832100 cp ha-832100-m04:/home/docker/cp-test.txt                                                                       | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:41 UTC | 14 Mar 24 18:41 UTC |
	|         | ha-832100:/home/docker/cp-test_ha-832100-m04_ha-832100.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n                                                                                                          | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:41 UTC | 14 Mar 24 18:41 UTC |
	|         | ha-832100-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n ha-832100 sudo cat                                                                                       | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:41 UTC | 14 Mar 24 18:41 UTC |
	|         | /home/docker/cp-test_ha-832100-m04_ha-832100.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-832100 cp ha-832100-m04:/home/docker/cp-test.txt                                                                       | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:41 UTC | 14 Mar 24 18:42 UTC |
	|         | ha-832100-m02:/home/docker/cp-test_ha-832100-m04_ha-832100-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n                                                                                                          | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:42 UTC | 14 Mar 24 18:42 UTC |
	|         | ha-832100-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n ha-832100-m02 sudo cat                                                                                   | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:42 UTC | 14 Mar 24 18:42 UTC |
	|         | /home/docker/cp-test_ha-832100-m04_ha-832100-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-832100 cp ha-832100-m04:/home/docker/cp-test.txt                                                                       | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:42 UTC | 14 Mar 24 18:42 UTC |
	|         | ha-832100-m03:/home/docker/cp-test_ha-832100-m04_ha-832100-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n                                                                                                          | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:42 UTC | 14 Mar 24 18:42 UTC |
	|         | ha-832100-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-832100 ssh -n ha-832100-m03 sudo cat                                                                                   | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:42 UTC | 14 Mar 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-832100-m04_ha-832100-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-832100 node stop m02 -v=7                                                                                              | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:43 UTC | 14 Mar 24 18:43 UTC |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	| node    | ha-832100 node start m02 -v=7                                                                                             | ha-832100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 18:44 UTC |                     |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:16:19
	Running on machine: minikube7
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:16:19.570103    4456 out.go:291] Setting OutFile to fd 1484 ...
	I0314 18:16:19.570103    4456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:16:19.570103    4456 out.go:304] Setting ErrFile to fd 1488...
	I0314 18:16:19.570103    4456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:16:19.590119    4456 out.go:298] Setting JSON to false
	I0314 18:16:19.594110    4456 start.go:129] hostinfo: {"hostname":"minikube7","uptime":61984,"bootTime":1710378195,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 18:16:19.594110    4456 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 18:16:19.600257    4456 out.go:177] * [ha-832100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 18:16:19.603483    4456 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:16:19.603483    4456 notify.go:220] Checking for updates...
	I0314 18:16:19.606301    4456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:16:19.608697    4456 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 18:16:19.610828    4456 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:16:19.613298    4456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:16:19.615748    4456 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:16:24.552186    4456 out.go:177] * Using the hyperv driver based on user configuration
	I0314 18:16:24.555353    4456 start.go:297] selected driver: hyperv
	I0314 18:16:24.555353    4456 start.go:901] validating driver "hyperv" against <nil>
	I0314 18:16:24.555353    4456 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:16:24.600539    4456 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 18:16:24.602440    4456 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:16:24.602440    4456 cni.go:84] Creating CNI manager for ""
	I0314 18:16:24.602440    4456 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0314 18:16:24.602440    4456 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 18:16:24.602440    4456 start.go:340] cluster config:
	{Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:16:24.603083    4456 iso.go:125] acquiring lock: {Name:mk1b3e73402180391a20a865a9454da445c269fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:16:24.608581    4456 out.go:177] * Starting "ha-832100" primary control-plane node in "ha-832100" cluster
	I0314 18:16:24.611056    4456 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:16:24.611056    4456 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0314 18:16:24.611056    4456 cache.go:56] Caching tarball of preloaded images
	I0314 18:16:24.611056    4456 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 18:16:24.611056    4456 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 18:16:24.612103    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:16:24.612103    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json: {Name:mk7260dd1ee06e834018ca0cc2517aa0aa781219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:16:24.613610    4456 start.go:360] acquireMachinesLock for ha-832100: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:16:24.613610    4456 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-832100"
	I0314 18:16:24.613610    4456 start.go:93] Provisioning new machine with config: &{Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:16:24.613610    4456 start.go:125] createHost starting for "" (driver="hyperv")
	I0314 18:16:24.618610    4456 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 18:16:24.618610    4456 start.go:159] libmachine.API.Create for "ha-832100" (driver="hyperv")
	I0314 18:16:24.619612    4456 client.go:168] LocalClient.Create starting
	I0314 18:16:24.619612    4456 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0314 18:16:24.619612    4456 main.go:141] libmachine: Decoding PEM data...
	I0314 18:16:24.619612    4456 main.go:141] libmachine: Parsing certificate...
	I0314 18:16:24.619612    4456 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0314 18:16:24.619612    4456 main.go:141] libmachine: Decoding PEM data...
	I0314 18:16:24.619612    4456 main.go:141] libmachine: Parsing certificate...
	I0314 18:16:24.619612    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0314 18:16:26.574347    4456 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0314 18:16:26.574347    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:26.574975    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0314 18:16:28.230177    4456 main.go:141] libmachine: [stdout =====>] : False
	
	I0314 18:16:28.230177    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:28.230177    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 18:16:29.635494    4456 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 18:16:29.636228    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:29.636228    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 18:16:33.032214    4456 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 18:16:33.033373    4456 main.go:141] libmachine: [stderr =====>] : 
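libmachine drives Hyper-V by shelling out to powershell.exe with -NoProfile -NonInteractive and parsing the ConvertTo-Json output; here it finds only the "Default Switch" (SwitchType 1, i.e. Internal). A self-contained sketch of that query-and-parse pattern; the struct and error handling are illustrative, not libmachine's code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	// Same query as the log, minus the External/known-Id filter.
	script := `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		log.Fatal(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		log.Fatal(err)
	}
	for _, s := range switches {
		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}
```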
	I0314 18:16:33.035821    4456 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:16:33.399593    4456 main.go:141] libmachine: Creating SSH key...
	I0314 18:16:33.824314    4456 main.go:141] libmachine: Creating VM...
	I0314 18:16:33.824314    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 18:16:36.435806    4456 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 18:16:36.435806    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:36.435984    4456 main.go:141] libmachine: Using switch "Default Switch"
	I0314 18:16:36.436068    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 18:16:38.103622    4456 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 18:16:38.103821    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:38.103821    4456 main.go:141] libmachine: Creating VHD
	I0314 18:16:38.103929    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0314 18:16:41.682302    4456 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 714516BD-7790-4516-B766-B7B00B9D56C7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0314 18:16:41.682492    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:41.682492    4456 main.go:141] libmachine: Writing magic tar header
	I0314 18:16:41.682580    4456 main.go:141] libmachine: Writing SSH key tar header
	I0314 18:16:41.691276    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0314 18:16:44.743032    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:16:44.743032    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:44.743032    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\disk.vhd' -SizeBytes 20000MB
	I0314 18:16:47.143393    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:16:47.143393    4456 main.go:141] libmachine: [stderr =====>] : 
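The sequence above is the hyperv driver's disk preparation: create a small *fixed* VHD (whose data area starts at byte 0), write a tar stream containing the generated SSH key at the start of the image (the "Writing magic tar header" / "Writing SSH key tar header" lines), then convert it to a dynamic VHD and resize it to the requested 20000MB so the guest can extract the key on first boot. A sketch of embedding a key as a tar stream with archive/tar; the file names and target image here are hypothetical, not the driver's exact layout:

```go
package main

import (
	"archive/tar"
	"log"
	"os"
)

func main() {
	// Hypothetical: embed an SSH public key as a tar stream at the start
	// of a raw disk image, in the spirit of the "magic tar header" above.
	key, err := os.ReadFile("id_rsa.pub")
	if err != nil {
		log.Fatal(err)
	}
	disk, err := os.OpenFile("disk.raw", os.O_WRONLY, 0) // writes begin at offset 0
	if err != nil {
		log.Fatal(err)
	}
	defer disk.Close()

	tw := tar.NewWriter(disk)
	if err := tw.WriteHeader(&tar.Header{
		Name: ".ssh/authorized_keys",
		Mode: 0o644,
		Size: int64(len(key)),
	}); err != nil {
		log.Fatal(err)
	}
	if _, err := tw.Write(key); err != nil {
		log.Fatal(err)
	}
	if err := tw.Close(); err != nil {
		log.Fatal(err)
	}
}
```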
	I0314 18:16:47.143826    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-832100 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0314 18:16:50.516514    4456 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-832100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0314 18:16:50.516514    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:50.516575    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-832100 -DynamicMemoryEnabled $false
	I0314 18:16:52.613917    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:16:52.613917    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:52.614286    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-832100 -Count 2
	I0314 18:16:54.642826    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:16:54.642826    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:54.643279    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-832100 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\boot2docker.iso'
	I0314 18:16:57.078584    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:16:57.079474    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:57.079474    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-832100 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\disk.vhd'
	I0314 18:16:59.532932    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:16:59.533013    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:16:59.533013    4456 main.go:141] libmachine: Starting VM...
	I0314 18:16:59.533013    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-832100
	I0314 18:17:02.467216    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:17:02.468233    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:02.468233    4456 main.go:141] libmachine: Waiting for host to start...
	I0314 18:17:02.468460    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:04.515225    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:04.515782    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:04.515860    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:06.832825    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:17:06.832825    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:07.838067    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:09.858535    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:09.858535    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:09.859025    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:12.195879    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:17:12.196830    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:13.199730    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:15.225520    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:15.225520    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:15.225520    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:17.506496    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:17:17.510526    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:18.516861    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:20.524104    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:20.524104    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:20.524104    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:22.834956    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:17:22.834956    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:23.847829    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:25.869745    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:25.870727    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:25.870727    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:28.240340    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:28.240340    4456 main.go:141] libmachine: [stderr =====>] : 
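After Start-VM, the driver polls the VM state and the first adapter's first IP address roughly every five seconds; the query returns empty stdout until the guest acquires 172.17.90.10 at 18:17:28. A minimal sketch of that wait loop (the PowerShell query is copied from the log; the timeout value is illustrative):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// getIP asks Hyper-V for the first IP address of the VM's first adapter,
// as the log's PowerShell query does; empty output means "not yet".
func getIP(vm string) (string, error) {
	q := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", q).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		ip, err := getIP("ha-832100")
		if err != nil {
			log.Fatal(err)
		}
		if ip != "" {
			fmt.Println("host up at", ip)
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("timed out waiting for an IP")
}
```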
	I0314 18:17:28.240901    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:30.221096    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:30.221986    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:30.221986    4456 machine.go:94] provisionDockerMachine start ...
	I0314 18:17:30.222171    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:32.207017    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:32.207017    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:32.207017    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:34.578365    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:34.583581    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:34.588078    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:17:34.598019    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:17:34.599032    4456 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:17:34.733533    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
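With the IP known, provisioning runs commands over SSH as user docker with the machine's id_rsa (the key path appears in the sshutil.go:53 lines of this log). A hedged sketch of that first `hostname` round trip using golang.org/x/crypto/ssh rather than libmachine's own client:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa`)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.17.90.10:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname") // the first provisioning probe in the log
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
```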
	I0314 18:17:34.733533    4456 buildroot.go:166] provisioning hostname "ha-832100"
	I0314 18:17:34.733620    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:36.719479    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:36.719945    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:36.720139    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:39.059374    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:39.059374    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:39.063548    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:17:39.064000    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:17:39.064073    4456 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-832100 && echo "ha-832100" | sudo tee /etc/hostname
	I0314 18:17:39.214222    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-832100
	
	I0314 18:17:39.214360    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:41.204669    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:41.204669    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:41.205254    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:43.526184    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:43.526499    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:43.530496    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:17:43.530971    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:17:43.530971    4456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:17:43.672815    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:17:43.672815    4456 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 18:17:43.672815    4456 buildroot.go:174] setting up certificates
	I0314 18:17:43.672815    4456 provision.go:84] configureAuth start
	I0314 18:17:43.673628    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:45.658422    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:45.658422    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:45.659276    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:48.050355    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:48.051145    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:48.051145    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:50.017154    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:50.017154    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:50.017517    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:52.355606    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:52.355606    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:52.356127    4456 provision.go:143] copyHostCerts
	I0314 18:17:52.356219    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 18:17:52.356599    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 18:17:52.356685    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 18:17:52.357086    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 18:17:52.357602    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 18:17:52.358274    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 18:17:52.358274    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 18:17:52.358661    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 18:17:52.359619    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 18:17:52.359845    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 18:17:52.359903    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 18:17:52.360187    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 18:17:52.360952    4456 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-832100 san=[127.0.0.1 172.17.90.10 ha-832100 localhost minikube]
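configureAuth generates a server certificate whose subject alternative names cover the loopback address, the VM IP, the machine name, and "minikube" (the san=[...] list above). A compact self-signed sketch showing the SAN wiring with crypto/x509; minikube actually signs with its CA rather than self-signing:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-832100"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log's san=[...] list:
		DNSNames:    []string{"ha-832100", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.90.10")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```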
	I0314 18:17:52.480194    4456 provision.go:177] copyRemoteCerts
	I0314 18:17:52.489181    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:17:52.489181    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:54.464307    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:54.464307    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:54.464385    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:17:56.832309    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:17:56.832950    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:56.832950    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:17:56.939853    4456 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4503374s)
	I0314 18:17:56.939920    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 18:17:56.940311    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 18:17:56.983139    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 18:17:56.983283    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:17:57.023840    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 18:17:57.024253    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0314 18:17:57.069071    4456 provision.go:87] duration metric: took 13.3946102s to configureAuth
	I0314 18:17:57.069173    4456 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:17:57.069987    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:17:57.070073    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:17:59.046331    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:17:59.046331    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:17:59.046440    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:01.433498    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:01.433498    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:01.437608    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:01.438011    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:18:01.438011    4456 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 18:18:01.567376    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 18:18:01.567376    4456 buildroot.go:70] root file system type: tmpfs
	I0314 18:18:01.568079    4456 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 18:18:01.568281    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:03.577590    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:03.578127    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:03.578206    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:05.974751    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:05.974751    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:05.979974    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:05.979974    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:18:05.980499    4456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 18:18:06.135976    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 18:18:06.135976    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:08.130369    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:08.130437    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:08.130437    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:10.512425    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:10.512425    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:10.516576    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:10.516851    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:18:10.516851    4456 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 18:18:12.636100    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 18:18:12.636100    4456 machine.go:97] duration metric: took 42.410923s to provisionDockerMachine
	I0314 18:18:12.636100    4456 client.go:171] duration metric: took 1m48.0083436s to LocalClient.Create
	I0314 18:18:12.636100    4456 start.go:167] duration metric: took 1m48.0093453s to libmachine.API.Create "ha-832100"
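	The unit swap just above is deliberately idempotent: the new docker.service is written to docker.service.new, and only when diff reports a difference is it moved into place, followed by daemon-reload, enable, and restart (here the unit did not exist yet, hence the diff error and the fresh symlink). Below is a minimal local sketch of the same write-compare-swap pattern; the systemctl sequence is taken from the log, but the helper itself is illustrative rather than minikube's actual code.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnitIfChanged mirrors the log's `diff -u old new || { mv new old;
// systemctl daemon-reload && enable && restart; }` idiom: swap in the new
// unit and restart the service only when its content actually changed.
func updateUnitIfChanged(path string, newContent []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.WriteFile(path+".new", newContent, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated sample
	if err := updateUnitIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Println(err)
	}
}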
	I0314 18:18:12.636100    4456 start.go:293] postStartSetup for "ha-832100" (driver="hyperv")
	I0314 18:18:12.636100    4456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:18:12.645942    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:18:12.645942    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:14.642369    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:14.642369    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:14.642369    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:17.025833    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:17.025833    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:17.026295    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:18:17.124029    4456 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4777515s)
	I0314 18:18:17.133691    4456 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:18:17.140244    4456 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:18:17.140244    4456 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 18:18:17.140773    4456 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 18:18:17.140985    4456 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 18:18:17.140985    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 18:18:17.150172    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:18:17.166649    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 18:18:17.208803    4456 start.go:296] duration metric: took 4.5723093s for postStartSetup
	I0314 18:18:17.211304    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:19.181226    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:19.181226    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:19.181301    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:21.590953    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:21.591818    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:21.591891    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:18:21.594083    4456 start.go:128] duration metric: took 1m56.9716561s to createHost
	I0314 18:18:21.594171    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:23.560059    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:23.560200    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:23.560257    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:25.943383    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:25.943383    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:25.947291    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:25.947970    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:18:25.947970    4456 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 18:18:26.074570    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440306.334716873
	
	I0314 18:18:26.074570    4456 fix.go:216] guest clock: 1710440306.334716873
	I0314 18:18:26.074570    4456 fix.go:229] Guest: 2024-03-14 18:18:26.334716873 +0000 UTC Remote: 2024-03-14 18:18:21.5941717 +0000 UTC m=+122.153683001 (delta=4.740545173s)
	I0314 18:18:26.074570    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:28.059838    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:28.059838    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:28.060392    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:30.430433    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:30.430433    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:30.435575    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:30.435639    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.90.10 22 <nil> <nil>}
	I0314 18:18:30.435639    4456 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710440306
	I0314 18:18:30.573055    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 18:18:26 UTC 2024
	
	I0314 18:18:30.573055    4456 fix.go:236] clock set: Thu Mar 14 18:18:26 UTC 2024
	 (err=<nil>)
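	The clock-fix step reads the guest clock with `date +%s.%N`, compares it against the host clock captured at createHost time, and resets the guest with `sudo date -s @<seconds>` when the drift is noticeable (4.740545173s here). A sketch of the delta computation, assuming the fractional part is the nine-digit nanosecond field that `%N` prints:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestDelta parses the output of `date +%s.%N` from the guest and returns
// how far the guest clock is ahead of the supplied host time.
func guestDelta(dateOutput string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// %N is zero-padded to nine digits, so this is already nanoseconds.
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// The "Remote" timestamp from the log above.
	host := time.Date(2024, 3, 14, 18, 18, 21, 594171700, time.UTC)
	delta, err := guestDelta("1710440306.334716873\n", host)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v\n", delta) // 4.740545173s, as logged
	if delta > 2*time.Second || delta < -2*time.Second {
		// minikube then resets the guest clock with `sudo date -s @<seconds>`,
		// as shown in the log; the threshold here is an assumption.
		fmt.Println("would sync guest clock")
	}
}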
	I0314 18:18:30.573055    4456 start.go:83] releasing machines lock for "ha-832100", held for 2m5.9499552s
	I0314 18:18:30.573824    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:32.595388    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:32.595520    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:32.595605    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:34.974371    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:34.974371    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:34.978905    4456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:18:34.978980    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:34.989738    4456 ssh_runner.go:195] Run: cat /version.json
	I0314 18:18:34.989738    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:18:36.979652    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:36.980560    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:36.980560    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:37.034815    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:18:37.034815    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:37.034914    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:18:39.426710    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:39.426710    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:39.426710    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:18:39.445096    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:18:39.445096    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:18:39.445096    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:18:39.527570    4456 ssh_runner.go:235] Completed: cat /version.json: (4.5374924s)
	I0314 18:18:39.543418    4456 ssh_runner.go:195] Run: systemctl --version
	I0314 18:18:39.663394    4456 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6840628s)
	I0314 18:18:39.675325    4456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:18:39.684605    4456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:18:39.693680    4456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:18:39.719997    4456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:18:39.719997    4456 start.go:494] detecting cgroup driver to use...
	I0314 18:18:39.720748    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:18:39.761611    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 18:18:39.787989    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 18:18:39.807207    4456 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 18:18:39.815120    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 18:18:39.843529    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:18:39.871811    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 18:18:39.901060    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:18:39.930414    4456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:18:39.960525    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 18:18:39.989233    4456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:18:40.014374    4456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:18:40.043459    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:40.224375    4456 ssh_runner.go:195] Run: sudo systemctl restart containerd
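	The block above rewrites /etc/containerd/config.toml with a fixed series of in-place sed edits (sandbox image, cgroupfs instead of SystemdCgroup, runc v2, CNI conf dir) and then reloads and restarts containerd. The edit list below is transcribed from the log; the small runner around it is illustrative only.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Transcribed from the log: each entry rewrites one setting so
	// containerd uses cgroupfs, runc v2, and the expected CNI conf dir.
	edits := []string{
		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
	}
	for _, cmd := range edits {
		if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("edit failed: %v: %s\n", err, out)
			return
		}
	}
	// Pick up the rewritten config, as the log does next.
	_ = exec.Command("sh", "-c", "sudo systemctl daemon-reload && sudo systemctl restart containerd").Run()
}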
	I0314 18:18:40.254873    4456 start.go:494] detecting cgroup driver to use...
	I0314 18:18:40.264215    4456 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 18:18:40.296123    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:18:40.326827    4456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:18:40.369696    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:18:40.401357    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:18:40.431556    4456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 18:18:40.502128    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:18:40.526311    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:18:40.570400    4456 ssh_runner.go:195] Run: which cri-dockerd
	I0314 18:18:40.586347    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 18:18:40.602878    4456 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 18:18:40.639724    4456 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 18:18:40.823799    4456 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 18:18:41.002917    4456 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 18:18:41.002917    4456 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 18:18:41.043462    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:41.222389    4456 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 18:18:43.732007    4456 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5094301s)
	I0314 18:18:43.740716    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 18:18:43.775083    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:18:43.807756    4456 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 18:18:43.994640    4456 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 18:18:44.186775    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:44.374442    4456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 18:18:44.411100    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:18:44.444240    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:44.633751    4456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 18:18:44.736416    4456 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 18:18:44.749404    4456 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
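	"Will wait 60s for socket path" is a bounded stat poll: keep checking for /var/run/cri-dockerd.sock until it appears or the deadline passes. A sketch of that wait; the 500ms poll interval is an assumption, not minikube's actual value.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or timeout elapses, mirroring the
// "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}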
	I0314 18:18:44.758930    4456 start.go:562] Will wait 60s for crictl version
	I0314 18:18:44.771523    4456 ssh_runner.go:195] Run: which crictl
	I0314 18:18:44.785524    4456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:18:44.853451    4456 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 18:18:44.860706    4456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:18:44.899884    4456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:18:44.937279    4456 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 18:18:44.937381    4456 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 18:18:44.940889    4456 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 18:18:44.940889    4456 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 18:18:44.940889    4456 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 18:18:44.940889    4456 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 18:18:44.942906    4456 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 18:18:44.942906    4456 ip.go:210] interface addr: 172.17.80.1/20
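	getIPForInterface walks the host's adapters, keeps the first one whose name starts with "vEthernet (Default Switch)", and takes its IPv4 address (172.17.80.1/20 above, skipping the link-local fe80:: entry). A self-contained sketch of that lookup using the standard library:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterfacePrefix returns the first IPv4 address of the first interface
// whose name starts with prefix, similar to getIPForInterface in the log.
func ipForInterfacePrefix(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue // e.g. "Ethernet 2" does not match, as in the log
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil // skips the fe80:: link-local entry
			}
		}
	}
	return nil, fmt.Errorf("no interface matches prefix %q", prefix)
}

func main() {
	ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("host-side switch IP:", ip)
}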
	I0314 18:18:44.951913    4456 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 18:18:44.958036    4456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:18:44.989374    4456 kubeadm.go:877] updating cluster {Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 18:18:44.989607    4456 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:18:44.995852    4456 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 18:18:45.018479    4456 docker.go:685] Got preloaded images: 
	I0314 18:18:45.018479    4456 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0314 18:18:45.027985    4456 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 18:18:45.054351    4456 ssh_runner.go:195] Run: which lz4
	I0314 18:18:45.059928    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0314 18:18:45.068729    4456 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 18:18:45.074694    4456 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 18:18:45.074907    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0314 18:18:46.912610    4456 docker.go:649] duration metric: took 1.8525431s to copy over tarball
	I0314 18:18:46.923411    4456 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 18:18:57.175149    4456 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.2509711s)
	I0314 18:18:57.175278    4456 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 18:18:57.243832    4456 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 18:18:57.261769    4456 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0314 18:18:57.301293    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:57.491895    4456 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 18:19:00.670420    4456 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1782876s)
	I0314 18:19:00.683094    4456 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 18:19:00.708931    4456 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 18:19:00.709027    4456 cache_images.go:84] Images are preloaded, skipping loading
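	The preload path above is: stat /preloaded.tar.lz4 on the guest, scp the ~423 MB tarball over when it is missing, extract it into /var with lz4 while preserving security.capability xattrs, delete it, and restart docker so the images show up. A condensed guest-side sketch (run locally here; minikube drives each step over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Existence check, same intent as `stat -c "%s %y" /preloaded.tar.lz4`.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("tarball missing; minikube would scp it over first:", err)
		return
	}
	// Unpack with lz4 while preserving security.capability xattrs,
	// exactly the flags shown in the log.
	cmd := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v: %s\n", err, out)
		return
	}
	_ = os.Remove(tarball) // free the space once the layers are in /var/lib/docker
	// A docker restart then makes the preloaded images visible to `docker images`.
	_ = exec.Command("sudo", "systemctl", "restart", "docker").Run()
}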
	I0314 18:19:00.709027    4456 kubeadm.go:928] updating node { 172.17.90.10 8443 v1.28.4 docker true true} ...
	I0314 18:19:00.709194    4456 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-832100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.90.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
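	The kubelet unit above is rendered from per-node values: the binaries path is keyed by the Kubernetes version, and the hostname override and node IP come from the node entry in the cluster config. A small text/template sketch of that substitution; the template is reconstructed from the logged unit, not copied from minikube's source.

package main

import (
	"os"
	"text/template"
)

// The per-node values substituted into the kubelet ExecStart line above.
type kubeletOpts struct {
	Version, Hostname, NodeIP string
}

const execStart = `ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet ` +
	`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
	`--config=/var/lib/kubelet/config.yaml ` +
	`--hostname-override={{.Hostname}} ` +
	`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
	`--node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(execStart))
	// Values from the log for this control-plane node.
	_ = t.Execute(os.Stdout, kubeletOpts{
		Version:  "v1.28.4",
		Hostname: "ha-832100",
		NodeIP:   "172.17.90.10",
	})
}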
	I0314 18:19:00.718289    4456 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 18:19:00.752901    4456 cni.go:84] Creating CNI manager for ""
	I0314 18:19:00.752901    4456 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 18:19:00.752978    4456 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 18:19:00.753031    4456 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.90.10 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-832100 NodeName:ha-832100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.90.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.90.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 18:19:00.753031    4456 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.90.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-832100"
	  kubeletExtraArgs:
	    node-ip: 172.17.90.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.90.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 18:19:00.753031    4456 kube-vip.go:105] generating kube-vip config ...
	I0314 18:19:00.753031    4456 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
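	kube-vip.go renders this static pod manifest from the HA virtual IP (172.17.95.254) and the API server port. An abbreviated text/template sketch showing only the per-cluster fields; the full manifest carries the rest of the env list above verbatim, and this template is illustrative, not minikube's actual one.

package main

import (
	"os"
	"text/template"
)

// Only the values that vary per cluster; the rest of the manifest is static.
type vipConfig struct {
	VIP  string
	Port string
}

// Abbreviated manifest: just the env entries that carry per-cluster data.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the log: the HA virtual IP and the API server port.
	_ = t.Execute(os.Stdout, vipConfig{VIP: "172.17.95.254", Port: "8443"})
}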
	I0314 18:19:00.761680    4456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:19:00.778879    4456 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 18:19:00.788003    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0314 18:19:00.804013    4456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0314 18:19:00.837584    4456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:19:00.869535    4456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0314 18:19:00.899726    4456 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0314 18:19:00.936437    4456 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:19:00.942599    4456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:19:00.971518    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:19:01.163200    4456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:19:01.189974    4456 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100 for IP: 172.17.90.10
	I0314 18:19:01.190071    4456 certs.go:194] generating shared ca certs ...
	I0314 18:19:01.190108    4456 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:01.190745    4456 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 18:19:01.190999    4456 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 18:19:01.191183    4456 certs.go:256] generating profile certs ...
	I0314 18:19:01.191220    4456 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.key
	I0314 18:19:01.191220    4456 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.crt with IP's: []
	I0314 18:19:01.463738    4456 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.crt ...
	I0314 18:19:01.463738    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.crt: {Name:mke7ee85d592d623b3614c18b0b008ebca64d685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:01.464740    4456 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.key ...
	I0314 18:19:01.464740    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.key: {Name:mkdce32fffea6e89971c206f5b31259fa396197c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:01.465747    4456 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.9d03ba8b
	I0314 18:19:01.466641    4456 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.9d03ba8b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.90.10 172.17.95.254]
	I0314 18:19:02.138161    4456 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.9d03ba8b ...
	I0314 18:19:02.138161    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.9d03ba8b: {Name:mk6b3b16c8ed352ed751c3eb6da317e96d566d2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:02.140164    4456 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.9d03ba8b ...
	I0314 18:19:02.140164    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.9d03ba8b: {Name:mkf92df0e21d368f7173a4c5e155dc40a1b2ed63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:02.141457    4456 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.9d03ba8b -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt
	I0314 18:19:02.151651    4456 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.9d03ba8b -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key
	I0314 18:19:02.152652    4456 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key
	I0314 18:19:02.152652    4456 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt with IP's: []
	I0314 18:19:02.292452    4456 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt ...
	I0314 18:19:02.292452    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt: {Name:mk51d01dcd9f3462515c3f3cd9453163da1a210a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:02.293472    4456 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key ...
	I0314 18:19:02.293472    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key: {Name:mk1b33d1bbd689220bdf6afe70b77dac85333b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
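	Each profile cert above is generated with an explicit IP SAN list; the apiserver cert covers the service IP, localhost, the node IP, and the HA VIP. A sketch with crypto/x509 that self-signs for brevity (minikube signs these with its minikubeCA key instead):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SAN list from the log
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("172.17.90.10"),
			net.ParseIP("172.17.95.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Println("generated apiserver-style cert with IP SANs")
}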
	I0314 18:19:02.295078    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:19:02.295078    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:19:02.295078    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:19:02.295078    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:19:02.296174    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:19:02.296286    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:19:02.296286    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:19:02.303929    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:19:02.305075    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 18:19:02.305075    4456 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 18:19:02.305075    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 18:19:02.306120    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 18:19:02.306120    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 18:19:02.306120    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 18:19:02.306721    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 18:19:02.306879    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 18:19:02.306879    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:02.306879    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 18:19:02.308314    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:19:02.351854    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 18:19:02.397991    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:19:02.439484    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 18:19:02.489177    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 18:19:02.531122    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 18:19:02.573853    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:19:02.614786    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 18:19:02.656146    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 18:19:02.701887    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:19:02.744436    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 18:19:02.788793    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 18:19:02.829103    4456 ssh_runner.go:195] Run: openssl version
	I0314 18:19:02.846387    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 18:19:02.873747    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 18:19:02.880753    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 18:19:02.890042    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 18:19:02.907607    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
	I0314 18:19:02.933906    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 18:19:02.960438    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 18:19:02.967437    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 18:19:02.976216    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 18:19:02.994051    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:19:03.022347    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:19:03.050752    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:03.058794    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:03.067761    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:03.086070    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
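	The `test -L ... || ln -fs ...` pairs above install each CA under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem) so TLS stacks can locate it in /etc/ssl/certs; the hash itself comes from `openssl x509 -hash -noout`. A sketch of the idempotent link step, with minimal error handling:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors `test -L /etc/ssl/certs/<hash>.0 || ln -fs <pem> ...`:
// compute the subject hash with openssl, then create the link if absent.
func linkCertByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked; keeps the step idempotent
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}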
	I0314 18:19:03.114991    4456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:19:03.121913    4456 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:19:03.122244    4456 kubeadm.go:391] StartCluster: {Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:19:03.128989    4456 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 18:19:03.168084    4456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 18:19:03.195172    4456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 18:19:03.221740    4456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 18:19:03.238594    4456 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 18:19:03.238659    4456 kubeadm.go:156] found existing configuration files:
	
	I0314 18:19:03.247055    4456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 18:19:03.263809    4456 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 18:19:03.276206    4456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 18:19:03.303087    4456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 18:19:03.321025    4456 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 18:19:03.330144    4456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 18:19:03.359230    4456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 18:19:03.376060    4456 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 18:19:03.385009    4456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 18:19:03.412096    4456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 18:19:03.427993    4456 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 18:19:03.440304    4456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
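	The four grep/rm pairs above are the stale-config sweep: any kubeconfig that does not point at https://control-plane.minikube.internal:8443 is removed before kubeadm init runs (here all four are simply absent, as expected on first start). A compact sketch of the same loop:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	// The same four files the log checks one by one.
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || strings.Contains(string(data), endpoint) {
			continue // missing (first start) or already points at the endpoint
		}
		fmt.Println("removing stale config:", f)
		_ = os.Remove(f) // the log runs `sudo rm -f` here, which also tolerates absence
	}
}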
	I0314 18:19:03.457014    4456 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 18:19:03.869365    4456 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 18:19:18.642386    4456 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 18:19:18.642447    4456 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 18:19:18.642750    4456 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 18:19:18.643097    4456 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 18:19:18.643097    4456 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0314 18:19:18.643097    4456 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 18:19:18.645704    4456 out.go:204]   - Generating certificates and keys ...
	I0314 18:19:18.645704    4456 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 18:19:18.645704    4456 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 18:19:18.645704    4456 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 18:19:18.645704    4456 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 18:19:18.645704    4456 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 18:19:18.646698    4456 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 18:19:18.646698    4456 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 18:19:18.646698    4456 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-832100 localhost] and IPs [172.17.90.10 127.0.0.1 ::1]
	I0314 18:19:18.646698    4456 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 18:19:18.646698    4456 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-832100 localhost] and IPs [172.17.90.10 127.0.0.1 ::1]
	I0314 18:19:18.646698    4456 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 18:19:18.647712    4456 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 18:19:18.647712    4456 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 18:19:18.647712    4456 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 18:19:18.647712    4456 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 18:19:18.647712    4456 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 18:19:18.647712    4456 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 18:19:18.647712    4456 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 18:19:18.647712    4456 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 18:19:18.648703    4456 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 18:19:18.650703    4456 out.go:204]   - Booting up control plane ...
	I0314 18:19:18.650703    4456 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 18:19:18.650703    4456 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 18:19:18.650703    4456 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 18:19:18.650703    4456 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 18:19:18.651707    4456 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 18:19:18.651707    4456 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 18:19:18.651707    4456 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 18:19:18.651707    4456 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.607376 seconds
	I0314 18:19:18.651707    4456 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 18:19:18.652704    4456 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 18:19:18.652704    4456 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 18:19:18.652704    4456 kubeadm.go:309] [mark-control-plane] Marking the node ha-832100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 18:19:18.652704    4456 kubeadm.go:309] [bootstrap-token] Using token: 9rmtes.0i3jfqfb19kabi9y
	I0314 18:19:18.656707    4456 out.go:204]   - Configuring RBAC rules ...
	I0314 18:19:18.656707    4456 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 18:19:18.656707    4456 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 18:19:18.657711    4456 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 18:19:18.657711    4456 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 18:19:18.657711    4456 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 18:19:18.657711    4456 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 18:19:18.658717    4456 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 18:19:18.658717    4456 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 18:19:18.658717    4456 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 18:19:18.658717    4456 kubeadm.go:309] 
	I0314 18:19:18.658717    4456 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 18:19:18.658717    4456 kubeadm.go:309] 
	I0314 18:19:18.658717    4456 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 18:19:18.658717    4456 kubeadm.go:309] 
	I0314 18:19:18.658717    4456 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 18:19:18.658717    4456 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 18:19:18.658717    4456 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 18:19:18.658717    4456 kubeadm.go:309] 
	I0314 18:19:18.658717    4456 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 18:19:18.659726    4456 kubeadm.go:309] 
	I0314 18:19:18.659726    4456 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 18:19:18.659726    4456 kubeadm.go:309] 
	I0314 18:19:18.659726    4456 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 18:19:18.659726    4456 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 18:19:18.659726    4456 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 18:19:18.659726    4456 kubeadm.go:309] 
	I0314 18:19:18.659726    4456 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 18:19:18.659726    4456 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 18:19:18.659726    4456 kubeadm.go:309] 
	I0314 18:19:18.660708    4456 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9rmtes.0i3jfqfb19kabi9y \
	I0314 18:19:18.660708    4456 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb \
	I0314 18:19:18.660708    4456 kubeadm.go:309] 	--control-plane 
	I0314 18:19:18.660708    4456 kubeadm.go:309] 
	I0314 18:19:18.660708    4456 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 18:19:18.660708    4456 kubeadm.go:309] 
	I0314 18:19:18.660708    4456 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9rmtes.0i3jfqfb19kabi9y \
	I0314 18:19:18.660708    4456 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb 
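The --discovery-token-ca-cert-hash value printed by kubeadm above is the SHA-256 digest of the cluster CA's DER-encoded SubjectPublicKeyInfo. If the join output has scrolled away, it can be recomputed from the CA certificate alone; a small sketch (the ca.crt path is illustrative, inside the minikube VM the CA lives at /var/lib/minikube/certs/ca.crt):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Illustrative path; pass the cluster CA certificate here.
	data, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}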
	I0314 18:19:18.660708    4456 cni.go:84] Creating CNI manager for ""
	I0314 18:19:18.660708    4456 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 18:19:18.664709    4456 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0314 18:19:18.678306    4456 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0314 18:19:18.686282    4456 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0314 18:19:18.686282    4456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0314 18:19:18.729001    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0314 18:19:20.248255    4456 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5191406s)
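The "scp memory --> /var/tmp/minikube/cni.yaml" line above is minikube streaming an in-memory asset (here the generated kindnet manifest) to a path inside the VM, rather than copying a file from disk. A rough equivalent using golang.org/x/crypto/ssh, assuming an already-established *ssh.Client (the real ssh_runner implements the scp protocol; piping through sudo tee is a simpler stand-in with the same effect):

package vmcopy

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// writeRemote streams b to dst inside the VM, roughly what the
// "scp memory --> /var/tmp/minikube/cni.yaml" log line denotes.
func writeRemote(client *ssh.Client, dst string, b []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(b)
	// sudo tee discards stdout; the manifest lands at dst on the guest.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}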
	I0314 18:19:20.248255    4456 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 18:19:20.259272    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:20.260258    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-832100 minikube.k8s.io/updated_at=2024_03_14T18_19_20_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=ha-832100 minikube.k8s.io/primary=true
	I0314 18:19:20.265696    4456 ops.go:34] apiserver oom_adj: -16
	I0314 18:19:20.444909    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:20.953298    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:21.457360    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:21.956138    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:22.444787    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:22.946295    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:23.446967    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:23.949730    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:24.453183    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:24.952493    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:25.455636    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:25.964798    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:26.461958    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:26.945667    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:27.450236    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:27.950498    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:28.456067    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:28.957103    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:29.447930    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:29.953230    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:30.454542    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:30.957463    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:31.459529    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:31.949434    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:32.102377    4456 kubeadm.go:1106] duration metric: took 11.8532378s to wait for elevateKubeSystemPrivileges
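The burst of repeated `kubectl get sa default` runs above is the elevateKubeSystemPrivileges wait loop: the minikube-rbac clusterrolebinding created at 18:19:20 is only useful once the controller manager has minted the default service account, so minikube polls for it roughly every 500ms until it appears (11.85s here). A condensed sketch of that loop, with the kubectl invocation shortened for illustration:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds once the controller manager has created the
		// "default" service account in the target namespace.
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			log.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for the default service account")
}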
	W0314 18:19:32.102377    4456 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 18:19:32.102377    4456 kubeadm.go:393] duration metric: took 28.9779701s to StartCluster
	I0314 18:19:32.103377    4456 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:32.103377    4456 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:19:32.104392    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:32.106379    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 18:19:32.106379    4456 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 18:19:32.106379    4456 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:19:32.106379    4456 addons.go:69] Setting default-storageclass=true in profile "ha-832100"
	I0314 18:19:32.106379    4456 start.go:240] waiting for startup goroutines ...
	I0314 18:19:32.106379    4456 addons.go:69] Setting storage-provisioner=true in profile "ha-832100"
	I0314 18:19:32.106379    4456 addons.go:234] Setting addon storage-provisioner=true in "ha-832100"
	I0314 18:19:32.106379    4456 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-832100"
	I0314 18:19:32.106379    4456 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:19:32.106379    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:19:32.107390    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:19:32.107390    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:19:32.304139    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 18:19:32.890121    4456 start.go:948] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
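The sed pipeline above splices a hosts plugin block into the CoreDNS Corefile (and a log directive ahead of errors) so that host.minikube.internal resolves to the Hyper-V host from inside the cluster. After the replace, the relevant stanza of the coredns ConfigMap reads:

        hosts {
           172.17.80.1 host.minikube.internal
           fallthrough
        }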
	I0314 18:19:34.206141    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:19:34.206348    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:34.208924    4456 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 18:19:34.206694    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:19:34.209021    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:34.210206    4456 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:19:34.211183    4456 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:19:34.211776    4456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 18:19:34.211776    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:19:34.211972    4456 kapi.go:59] client config for ha-832100: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-832100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-832100\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 18:19:34.213336    4456 cert_rotation.go:137] Starting client certificate rotation controller
	I0314 18:19:34.213336    4456 addons.go:234] Setting addon default-storageclass=true in "ha-832100"
	I0314 18:19:34.213336    4456 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:19:34.214492    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:19:36.335141    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:19:36.335311    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:36.335141    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:19:36.335392    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:36.335392    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:19:36.335392    4456 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 18:19:36.335392    4456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 18:19:36.335392    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:19:38.402635    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:19:38.402635    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:38.402635    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:19:38.858045    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:19:38.858826    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:38.858826    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:19:39.003206    4456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:19:40.860077    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:19:40.860589    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:40.860974    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:19:40.991850    4456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 18:19:41.264681    4456 round_trippers.go:463] GET https://172.17.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0314 18:19:41.265221    4456 round_trippers.go:469] Request Headers:
	I0314 18:19:41.265221    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:19:41.265302    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:19:41.278457    4456 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0314 18:19:41.279333    4456 round_trippers.go:463] PUT https://172.17.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0314 18:19:41.279375    4456 round_trippers.go:469] Request Headers:
	I0314 18:19:41.279375    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:19:41.279375    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:19:41.279375    4456 round_trippers.go:473]     Content-Type: application/json
	I0314 18:19:41.283205    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:19:41.286182    4456 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0314 18:19:41.288972    4456 addons.go:505] duration metric: took 9.1819088s for enable addons: enabled=[storage-provisioner default-storageclass]
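The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses at 18:19:41 is the default-storageclass addon marking minikube's `standard` StorageClass as the cluster default. With client-go the equivalent update looks roughly like this (a sketch: error paths are minimal, the kubeconfig path is the in-VM one from the log, and the annotation key is the standard Kubernetes one):

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// Marking a StorageClass as default is done via this annotation.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}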
	I0314 18:19:41.289115    4456 start.go:245] waiting for cluster config update ...
	I0314 18:19:41.289115    4456 start.go:254] writing updated cluster config ...
	I0314 18:19:41.291529    4456 out.go:177] 
	I0314 18:19:41.301360    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:19:41.301360    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:19:41.306904    4456 out.go:177] * Starting "ha-832100-m02" control-plane node in "ha-832100" cluster
	I0314 18:19:41.310057    4456 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:19:41.310057    4456 cache.go:56] Caching tarball of preloaded images
	I0314 18:19:41.310583    4456 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 18:19:41.310583    4456 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 18:19:41.311122    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:19:41.315361    4456 start.go:360] acquireMachinesLock for ha-832100-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:19:41.316362    4456 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-832100-m02"
	I0314 18:19:41.316763    4456 start.go:93] Provisioning new machine with config: &{Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:19:41.317056    4456 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0314 18:19:41.320193    4456 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 18:19:41.320193    4456 start.go:159] libmachine.API.Create for "ha-832100" (driver="hyperv")
	I0314 18:19:41.320193    4456 client.go:168] LocalClient.Create starting
	I0314 18:19:41.320921    4456 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0314 18:19:41.321106    4456 main.go:141] libmachine: Decoding PEM data...
	I0314 18:19:41.321106    4456 main.go:141] libmachine: Parsing certificate...
	I0314 18:19:41.321289    4456 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0314 18:19:41.321472    4456 main.go:141] libmachine: Decoding PEM data...
	I0314 18:19:41.321533    4456 main.go:141] libmachine: Parsing certificate...
	I0314 18:19:41.321634    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0314 18:19:43.131412    4456 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0314 18:19:43.131581    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:43.131581    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0314 18:19:44.749899    4456 main.go:141] libmachine: [stdout =====>] : False
	
	I0314 18:19:44.750152    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:44.750225    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 18:19:46.143569    4456 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 18:19:46.144198    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:46.144198    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 18:19:49.507067    4456 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 18:19:49.507116    4456 main.go:141] libmachine: [stderr =====>] : 
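The ConvertTo-Json query above is how the Hyper-V driver enumerates usable switches: it accepts any External switch or the well-known Default Switch GUID (c08cb7b8-9b3c-408e-8e30-5e16a3aeb444), and on this host only the built-in Default Switch (SwitchType 1, i.e. Internal) exists. Parsing that output from Go might look like the following sketch, reusing the same PowerShell expression as the log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // 0 = Private, 1 = Internal, 2 = External
}

func main() {
	script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		log.Fatal(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		log.Fatal(err)
	}
	for _, s := range switches {
		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}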
	I0314 18:19:49.510906    4456 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:19:49.843434    4456 main.go:141] libmachine: Creating SSH key...
	I0314 18:19:49.942462    4456 main.go:141] libmachine: Creating VM...
	I0314 18:19:49.942462    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 18:19:52.604816    4456 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 18:19:52.604816    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:52.606490    4456 main.go:141] libmachine: Using switch "Default Switch"
	I0314 18:19:52.606490    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 18:19:54.259707    4456 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 18:19:54.259707    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:54.259707    4456 main.go:141] libmachine: Creating VHD
	I0314 18:19:54.259707    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0314 18:19:57.856082    4456 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4D3D8511-8E83-4933-80F8-706AD157DDD8
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0314 18:19:57.856168    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:19:57.856168    4456 main.go:141] libmachine: Writing magic tar header
	I0314 18:19:57.856254    4456 main.go:141] libmachine: Writing SSH key tar header
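The "magic tar header" lines are the boot2docker key-injection trick inherited from libmachine: before the fixed 10MB VHD is converted, a tiny tar stream carrying the freshly generated SSH public key is written at the start of the raw disk, where the guest's automount service finds it on first boot and installs it as authorized_keys. Very roughly (file names are illustrative and the real driver writes a few more tar entries):

package main

import (
	"archive/tar"
	"log"
	"os"
)

func main() {
	pub, err := os.ReadFile("id_rsa.pub") // illustrative path to the generated public key
	if err != nil {
		log.Fatal(err)
	}
	// A fixed-size VHD is raw disk data followed by a 512-byte footer,
	// so writing at offset 0 places the tar at the start of the disk.
	f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0700}); err != nil {
		log.Fatal(err)
	}
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pub))}
	if err := tw.WriteHeader(hdr); err != nil {
		log.Fatal(err)
	}
	if _, err := tw.Write(pub); err != nil {
		log.Fatal(err)
	}
	if err := tw.Close(); err != nil {
		log.Fatal(err)
	}
}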
	I0314 18:19:57.856608    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0314 18:20:00.888266    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:00.888266    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:00.888714    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\disk.vhd' -SizeBytes 20000MB
	I0314 18:20:03.330163    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:03.330163    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:03.330488    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-832100-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0314 18:20:06.720664    4456 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version

	----          ----- ----------- ----------------- ------   ------             -------
	ha-832100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0314 18:20:06.720664    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:06.720664    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-832100-m02 -DynamicMemoryEnabled $false
	I0314 18:20:08.838290    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:08.838290    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:08.839154    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-832100-m02 -Count 2
	I0314 18:20:10.912748    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:10.912748    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:10.912748    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-832100-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\boot2docker.iso'
	I0314 18:20:13.325996    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:13.326077    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:13.326077    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-832100-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\disk.vhd'
	I0314 18:20:15.801006    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:15.801006    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:15.801006    4456 main.go:141] libmachine: Starting VM...
	I0314 18:20:15.801006    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-832100-m02
	I0314 18:20:18.705799    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:18.705843    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:18.705843    4456 main.go:141] libmachine: Waiting for host to start...
	I0314 18:20:18.705887    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:20.795858    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:20.795858    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:20.795858    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:23.080745    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:23.080745    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:24.086566    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:26.118343    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:26.118532    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:26.118532    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:28.432690    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:28.432739    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:29.446734    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:31.510457    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:31.511401    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:31.511455    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:33.831428    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:33.831428    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:34.836006    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:36.891805    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:36.892536    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:36.892536    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:39.199446    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:20:39.199630    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:40.210024    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:42.252333    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:42.252425    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:42.252500    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:44.589079    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:20:44.590096    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:44.590144    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:46.539989    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:46.540894    4456 main.go:141] libmachine: [stderr =====>] : 
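"Waiting for host to start..." is a plain poll: each round the driver asks Hyper-V for the VM state and then for the first IP address on the first network adapter, retrying with a short sleep until DHCP has handed the guest an address; above it takes five rounds before 172.17.92.203 appears. A condensed sketch of that loop:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func ps(script string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "ha-832100-m02"
	for i := 0; i < 60; i++ {
		state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			log.Fatal(err)
		}
		if state == "Running" {
			ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if ip != "" {
				fmt.Println("VM is up at", ip)
				return
			}
		}
		time.Sleep(time.Second) // matches the ~1s pause between rounds in the log
	}
	log.Fatal("timed out waiting for an IP address")
}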
	I0314 18:20:46.540894    4456 machine.go:94] provisionDockerMachine start ...
	I0314 18:20:46.541174    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:48.515482    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:48.515482    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:48.515543    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:50.895160    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:20:50.895535    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:50.900320    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:50.900392    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:20:50.900392    4456 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:20:51.035434    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 18:20:51.035434    4456 buildroot.go:166] provisioning hostname "ha-832100-m02"
	I0314 18:20:51.035434    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:53.030125    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:53.031170    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:53.031170    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:55.368399    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:20:55.368399    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:55.372041    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:55.372726    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:20:55.372726    4456 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-832100-m02 && echo "ha-832100-m02" | sudo tee /etc/hostname
	I0314 18:20:55.527906    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-832100-m02
	
	I0314 18:20:55.527906    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:20:57.495035    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:20:57.495035    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:57.495035    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:20:59.820054    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:20:59.820054    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:20:59.824161    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:59.824161    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:20:59.824161    4456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:20:59.966337    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:20:59.966414    4456 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 18:20:59.966414    4456 buildroot.go:174] setting up certificates
	I0314 18:20:59.966462    4456 provision.go:84] configureAuth start
	I0314 18:20:59.966508    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:01.957124    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:01.957124    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:01.957215    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:04.303301    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:04.303301    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:04.303550    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:06.279950    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:06.279950    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:06.280007    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:08.643961    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:08.643961    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:08.643961    4456 provision.go:143] copyHostCerts
	I0314 18:21:08.644204    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 18:21:08.644251    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 18:21:08.644251    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 18:21:08.644782    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 18:21:08.645386    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 18:21:08.645386    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 18:21:08.645386    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 18:21:08.645984    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 18:21:08.646683    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 18:21:08.646683    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 18:21:08.646683    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 18:21:08.647224    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 18:21:08.648071    4456 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-832100-m02 san=[127.0.0.1 172.17.92.203 ha-832100-m02 localhost minikube]
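provision.go:117 generates a per-machine Docker TLS server certificate signed by the shared minikube CA, with exactly the SANs listed in the log (the VM's IP, its hostname, localhost, minikube). The core of such a generator with crypto/x509, heavily trimmed (serial number, key size, and validity are illustrative, and the CA key is assumed to be PKCS#1 RSA as minikube's ca-key.pem is):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func loadPEM(path string) []byte {
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	blk, _ := pem.Decode(b)
	if blk == nil {
		log.Fatalf("no PEM block in %s", path)
	}
	return blk.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(loadPEM("ca.pem"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(loadPEM("ca-key.pem"))
	if err != nil {
		log.Fatal(err)
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-832100-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs from the provision.go log line above.
		DNSNames:    []string{"ha-832100-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.92.203")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}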
	I0314 18:21:08.715064    4456 provision.go:177] copyRemoteCerts
	I0314 18:21:08.724847    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:21:08.724847    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:10.728304    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:10.728304    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:10.728810    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:13.044529    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:13.044807    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:13.045208    4456 sshutil.go:53] new ssh client: &{IP:172.17.92.203 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:21:13.150465    4456 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4251901s)
	I0314 18:21:13.150465    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 18:21:13.150975    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:21:13.195393    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 18:21:13.195806    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 18:21:13.236818    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 18:21:13.236818    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 18:21:13.278167    4456 provision.go:87] duration metric: took 13.3107223s to configureAuth
	I0314 18:21:13.278167    4456 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:21:13.278167    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:21:13.278764    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:15.240111    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:15.241054    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:15.241134    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:17.579794    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:17.579918    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:17.584051    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:21:17.584213    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:21:17.584213    4456 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 18:21:17.722389    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 18:21:17.722475    4456 buildroot.go:70] root file system type: tmpfs
	I0314 18:21:17.722475    4456 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 18:21:17.722475    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:19.728591    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:19.728591    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:19.728689    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:22.102428    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:22.102428    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:22.108989    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:21:22.108989    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:21:22.108989    4456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.90.10"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 18:21:22.276365    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.90.10
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 18:21:22.276468    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:24.258775    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:24.259188    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:24.259188    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:26.623481    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:26.623481    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:26.627753    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:21:26.628277    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:21:26.628357    4456 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 18:21:28.742697    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 18:21:28.742748    4456 machine.go:97] duration metric: took 42.1985223s to provisionDockerMachine
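The update just above is minikube's idempotent unit swap: the candidate unit is streamed to docker.service.new, diffed against the live file, and only moved into place (followed by daemon-reload, enable, and restart) when diff exits non-zero. On a fresh VM the live file does not exist yet, so diff fails with the "can't stat" message and the swap always runs. A minimal sketch of the same pattern over SSH with golang.org/x/crypto/ssh, assuming a reachable guest; the key path and unit content below are placeholders:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"golang.org/x/crypto/ssh"
)

// updateUnit streams the candidate unit to <unit>.new, diffs it against the
// live file, and swaps it in (daemon-reload + enable + restart) only when
// diff reports a difference -- or, as on first boot above, when the live
// file does not exist yet.
func updateUnit(client *ssh.Client, unit, content string) ([]byte, error) {
	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	sess.Stdin = strings.NewReader(content)
	cmd := fmt.Sprintf("sudo tee %[1]s.new >/dev/null && "+
		"{ sudo diff -u %[1]s %[1]s.new || "+
		"{ sudo mv %[1]s.new %[1]s && sudo systemctl daemon-reload && "+
		"sudo systemctl -f enable docker && sudo systemctl -f restart docker; }; }", unit)
	return sess.CombinedOutput(cmd)
}

func main() {
	key, err := os.ReadFile(`C:\path\to\machines\ha-832100-m02\id_rsa`) // placeholder
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "172.17.92.203:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	out, err := updateUnit(client, "/lib/systemd/system/docker.service", "[Unit]\n...")
	fmt.Printf("%s err=%v\n", out, err)
}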
	I0314 18:21:28.742748    4456 client.go:171] duration metric: took 1m47.4146004s to LocalClient.Create
	I0314 18:21:28.742867    4456 start.go:167] duration metric: took 1m47.4146691s to libmachine.API.Create "ha-832100"
	I0314 18:21:28.742921    4456 start.go:293] postStartSetup for "ha-832100-m02" (driver="hyperv")
	I0314 18:21:28.742921    4456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:21:28.751936    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:21:28.751936    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:30.706802    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:30.707839    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:30.707839    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:33.081790    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:33.081790    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:33.082137    4456 sshutil.go:53] new ssh client: &{IP:172.17.92.203 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:21:33.192817    4456 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4405549s)
	I0314 18:21:33.201417    4456 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:21:33.208749    4456 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:21:33.208749    4456 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 18:21:33.209174    4456 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 18:21:33.209707    4456 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 18:21:33.209707    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 18:21:33.218523    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:21:33.235981    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 18:21:33.278610    4456 start.go:296] duration metric: took 4.5353558s for postStartSetup
	I0314 18:21:33.280832    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:35.259792    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:35.260507    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:35.260507    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:37.630630    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:37.630630    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:37.630954    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:21:37.632730    4456 start.go:128] duration metric: took 1m56.3070648s to createHost
	I0314 18:21:37.632837    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:39.604201    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:39.604230    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:39.604361    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:41.978201    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:41.978201    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:41.982110    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:21:41.982544    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:21:41.982544    4456 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:21:42.119239    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440502.378736030
	
	I0314 18:21:42.119318    4456 fix.go:216] guest clock: 1710440502.378736030
	I0314 18:21:42.119394    4456 fix.go:229] Guest: 2024-03-14 18:21:42.37873603 +0000 UTC Remote: 2024-03-14 18:21:37.63273 +0000 UTC m=+318.177673601 (delta=4.74600603s)
	I0314 18:21:42.119467    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:44.102908    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:44.102908    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:44.103007    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:46.466549    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:46.466865    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:46.470719    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:21:46.471099    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.92.203 22 <nil> <nil>}
	I0314 18:21:46.471099    4456 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710440502
	I0314 18:21:46.625861    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 18:21:42 UTC 2024
	
	I0314 18:21:46.625861    4456 fix.go:236] clock set: Thu Mar 14 18:21:42 UTC 2024
	 (err=<nil>)
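The mangled verbs in "date +%!s(MISSING).%!N(MISSING)" are just Go's fmt complaining about a log format string; the command actually run is date +%s.%N, whose output (1710440502.378736030) appears in the next line. fix.go compares that guest clock against the host and, because the delta (4.746s) exceeds its threshold, resets it with sudo date -s @<epoch>. A sketch of the drift check, assuming the guest output has been captured as a string; the 2s threshold is an illustrative value, not necessarily minikube's:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockFixCommand parses the guest's `date +%s.%N` output and, when the
// drift from the local clock exceeds maxDrift, returns the reset command
// seen in the log; an empty string means the clocks are close enough.
func clockFixCommand(guestOut string, maxDrift time.Duration) (string, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return "", err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	drift := time.Since(guest)
	if drift < 0 {
		drift = -drift
	}
	if drift <= maxDrift {
		return "", nil
	}
	return fmt.Sprintf("sudo date -s @%d", time.Now().Unix()), nil
}

func main() {
	cmd, err := clockFixCommand("1710440502.378736030\n", 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // prints a "sudo date -s @..." command when drift is large
}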
	I0314 18:21:46.625861    4456 start.go:83] releasing machines lock for "ha-832100-m02", held for 2m5.3000853s
	I0314 18:21:46.625861    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:48.579633    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:48.580629    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:48.580695    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:50.951108    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:50.951108    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:50.954485    4456 out.go:177] * Found network options:
	I0314 18:21:50.956768    4456 out.go:177]   - NO_PROXY=172.17.90.10
	W0314 18:21:50.958596    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:21:50.961097    4456 out.go:177]   - NO_PROXY=172.17.90.10
	W0314 18:21:50.962375    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:21:50.963204    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:21:50.965202    4456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:21:50.965202    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:50.973404    4456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 18:21:50.973404    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:21:52.971332    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:52.971332    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:52.971420    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:53.007148    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:21:53.007148    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:53.007413    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 18:21:55.382068    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:55.382123    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:55.382602    4456 sshutil.go:53] new ssh client: &{IP:172.17.92.203 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:21:55.410024    4456 main.go:141] libmachine: [stdout =====>] : 172.17.92.203
	
	I0314 18:21:55.411133    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:21:55.411429    4456 sshutil.go:53] new ssh client: &{IP:172.17.92.203 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m02\id_rsa Username:docker}
	I0314 18:21:55.553319    4456 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5795789s)
	I0314 18:21:55.553319    4456 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5877799s)
	W0314 18:21:55.553319    4456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:21:55.561463    4456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:21:55.588655    4456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:21:55.588711    4456 start.go:494] detecting cgroup driver to use...
	I0314 18:21:55.588768    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:21:55.629346    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 18:21:55.657509    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 18:21:55.676540    4456 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 18:21:55.685048    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 18:21:55.714301    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:21:55.741093    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 18:21:55.768933    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:21:55.797287    4456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:21:55.825160    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 18:21:55.853888    4456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:21:55.879472    4456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:21:55.906001    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:21:56.101249    4456 ssh_runner.go:195] Run: sudo systemctl restart containerd
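The burst of sed -i commands above edits /etc/containerd/config.toml in place: pinning the sandbox (pause) image, forcing SystemdCgroup = false so containerd uses the cgroupfs driver detected earlier, migrating legacy runtime names to io.containerd.runc.v2, and pointing conf_dir at /etc/cni/net.d, before enabling IPv4 forwarding and restarting the service. The same line-oriented rewrites can be expressed as anchored regex replacements; a sketch under that assumption (the sample input is illustrative, not the VM's real config):

package main

import (
	"fmt"
	"regexp"
)

// rules mirrors the sed expressions above; ${1} preserves the original
// indentation captured by the leading group.
var rules = []struct{ re, repl string }{
	{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
	{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
	{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
	{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
}

func rewrite(conf string) string {
	for _, r := range rules {
		conf = regexp.MustCompile(r.re).ReplaceAllString(conf, r.repl)
	}
	return conf
}

func main() {
	in := "    SystemdCgroup = true\n    sandbox_image = \"registry.k8s.io/pause:3.8\"\n"
	fmt.Print(rewrite(in)) // both lines rewritten, indentation kept
}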
	I0314 18:21:56.132783    4456 start.go:494] detecting cgroup driver to use...
	I0314 18:21:56.143142    4456 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 18:21:56.174635    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:21:56.206349    4456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:21:56.241135    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:21:56.274036    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:21:56.306547    4456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 18:21:56.363873    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:21:56.390643    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:21:56.431733    4456 ssh_runner.go:195] Run: which cri-dockerd
	I0314 18:21:56.447557    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 18:21:56.465136    4456 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 18:21:56.503837    4456 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 18:21:56.692789    4456 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 18:21:56.862304    4456 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 18:21:56.862304    4456 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 18:21:56.903157    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:21:57.095477    4456 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 18:21:59.579702    4456 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4840429s)
	I0314 18:21:59.587869    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 18:21:59.620122    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:21:59.651516    4456 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 18:21:59.841504    4456 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 18:22:00.030840    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:22:00.223549    4456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 18:22:00.265825    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:22:00.298977    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:22:00.476299    4456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 18:22:00.568571    4456 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 18:22:00.578931    4456 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 18:22:00.587020    4456 start.go:562] Will wait 60s for crictl version
	I0314 18:22:00.595526    4456 ssh_runner.go:195] Run: which crictl
	I0314 18:22:00.610218    4456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:22:00.676046    4456 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 18:22:00.682932    4456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:22:00.724216    4456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:22:00.760230    4456 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 18:22:00.762534    4456 out.go:177]   - env NO_PROXY=172.17.90.10
	I0314 18:22:00.764590    4456 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 18:22:00.767581    4456 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 18:22:00.767581    4456 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 18:22:00.767581    4456 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 18:22:00.767581    4456 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 18:22:00.770579    4456 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 18:22:00.770579    4456 ip.go:210] interface addr: 172.17.80.1/20
	I0314 18:22:00.778578    4456 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 18:22:00.785303    4456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
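These two commands are the idempotent /etc/hosts update: grep first checks whether the host.minikube.internal mapping is already present; if not, a filtered copy of /etc/hosts (any stale lines for that name removed, the fresh "172.17.80.1<tab>host.minikube.internal" entry appended) is staged in /tmp and copied over the original in a single cp. The same pattern appears again below for control-plane.minikube.internal. A pure-Go sketch of the rewrite step:

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing lines mapping the given hostname and appends
// a fresh "ip<TAB>host" entry, mirroring the grep -v / echo pipeline above.
func upsertHost(hostsFile, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hostsFile, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale mapping for this name: drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.17.80.2\thost.minikube.internal\n"
	// Prints localhost unchanged plus the refreshed minikube mapping.
	fmt.Print(upsertHost(hosts, "172.17.80.1", "host.minikube.internal"))
}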
	I0314 18:22:00.805099    4456 mustload.go:65] Loading cluster: ha-832100
	I0314 18:22:00.805629    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:22:00.805950    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:22:02.783943    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:22:02.783943    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:22:02.783943    4456 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:22:02.784595    4456 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100 for IP: 172.17.92.203
	I0314 18:22:02.784595    4456 certs.go:194] generating shared ca certs ...
	I0314 18:22:02.784595    4456 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:22:02.785156    4456 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 18:22:02.785379    4456 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 18:22:02.785604    4456 certs.go:256] generating profile certs ...
	I0314 18:22:02.785798    4456 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.key
	I0314 18:22:02.785798    4456 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.eb63f332
	I0314 18:22:02.785798    4456 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.eb63f332 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.90.10 172.17.92.203 172.17.95.254]
	I0314 18:22:03.076603    4456 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.eb63f332 ...
	I0314 18:22:03.076603    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.eb63f332: {Name:mka9d3bf3027e4ef73e17f329886422d122d9fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:22:03.077597    4456 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.eb63f332 ...
	I0314 18:22:03.078615    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.eb63f332: {Name:mk9bbe53e98d6a302e589182eb50882786a3f049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:22:03.078792    4456 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.eb63f332 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt
	I0314 18:22:03.089883    4456 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.eb63f332 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key
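The regenerated apiserver certificate carries every address a client might dial as an IP SAN: the in-cluster service IP 10.96.0.1, loopback, both control-plane node IPs (172.17.90.10 and the new 172.17.92.203), and the kube-vip VIP 172.17.95.254; leaving any of them out would break TLS verification on that path. A minimal sketch of issuing such a cert from an existing CA with crypto/x509 (key size, validity, and subject below are illustrative choices):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServingCert signs a server certificate whose IP SANs cover every
// address clients may dial, as the apiserver cert above does.
func issueServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range ips {
		tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	// Throwaway CA standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now().Add(-time.Hour), NotAfter: time.Now().Add(24 * time.Hour),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	certPEM, _ := issueServingCert(ca, caKey, []string{
		"10.96.0.1", "127.0.0.1", "10.0.0.1", "172.17.90.10", "172.17.92.203", "172.17.95.254",
	})
	os.Stdout.Write(certPEM)
}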
	I0314 18:22:03.095968    4456 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key
	I0314 18:22:03.095968    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:22:03.095968    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:22:03.096988    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:22:03.097140    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:22:03.097188    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:22:03.097323    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:22:03.097423    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:22:03.097423    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:22:03.097423    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 18:22:03.098074    4456 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 18:22:03.098152    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 18:22:03.098440    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 18:22:03.098675    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 18:22:03.098869    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 18:22:03.099056    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 18:22:03.099368    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 18:22:03.099368    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 18:22:03.099576    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:22:03.099733    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:22:05.099737    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:22:05.099875    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:22:05.100091    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:22:07.489184    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:22:07.489184    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:22:07.489627    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:22:07.584214    4456 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0314 18:22:07.592013    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0314 18:22:07.620274    4456 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0314 18:22:07.626977    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0314 18:22:07.653502    4456 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0314 18:22:07.660186    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0314 18:22:07.688875    4456 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0314 18:22:07.695642    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0314 18:22:07.723707    4456 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0314 18:22:07.730252    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0314 18:22:07.758006    4456 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0314 18:22:07.764776    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0314 18:22:07.782880    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:22:07.826679    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 18:22:07.877138    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:22:07.917686    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 18:22:07.957897    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0314 18:22:07.999256    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 18:22:08.041079    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:22:08.085453    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 18:22:08.126309    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 18:22:08.169183    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 18:22:08.210358    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:22:08.251733    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0314 18:22:08.280623    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0314 18:22:08.308117    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0314 18:22:08.340433    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0314 18:22:08.372023    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0314 18:22:08.399929    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0314 18:22:08.428519    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0314 18:22:08.465981    4456 ssh_runner.go:195] Run: openssl version
	I0314 18:22:08.483293    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 18:22:08.510652    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 18:22:08.517357    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 18:22:08.526056    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 18:22:08.542485    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
	I0314 18:22:08.568656    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 18:22:08.599940    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 18:22:08.606468    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 18:22:08.615236    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 18:22:08.632437    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:22:08.660797    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:22:08.687584    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:22:08.695295    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:22:08.703907    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:22:08.722737    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:22:08.749230    4456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:22:08.755242    4456 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:22:08.755242    4456 kubeadm.go:928] updating node {m02 172.17.92.203 8443 v1.28.4 docker true true} ...
	I0314 18:22:08.755941    4456 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-832100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.92.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:22:08.756008    4456 kube-vip.go:105] generating kube-vip config ...
	I0314 18:22:08.756041    4456 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
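kube-vip.go renders the static pod manifest above from a template: the pod runs on the host network with NET_ADMIN/NET_RAW so it can claim the VIP 172.17.95.254 on eth0 via ARP, and uses the plndr-cp-lock lease for control-plane leader election. A sketch of that templating step, assuming only the per-cluster fields vary (the trimmed manifest below is illustrative; the full one is printed above):

package main

import (
	"os"
	"text/template"
)

// Trimmed manifest template: only the fields that vary per cluster are
// parameterized; the full static pod printed above carries the rest.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	_ = t.Execute(os.Stdout, map[string]any{
		"Interface": "eth0",
		"VIP":       "172.17.95.254",
		"Port":      8443,
		"Image":     "ghcr.io/kube-vip/kube-vip:v0.7.1",
	})
}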
	I0314 18:22:08.765523    4456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:22:08.782650    4456 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0314 18:22:08.790898    4456 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0314 18:22:08.810926    4456 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0314 18:22:08.810926    4456 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0314 18:22:08.810926    4456 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
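Each binary URL carries a ?checksum=file:...sha256 fragment, so the downloader fetches the published .sha256 alongside the artifact and rejects the download on mismatch. A standard-library sketch of the same verification, assuming the .sha256 file holds the hex digest (optionally followed by a filename):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// downloadVerified fetches url plus its sibling .sha256 file and fails
// unless the artifact's SHA-256 matches the published digest.
func downloadVerified(url string) ([]byte, error) {
	body, err := fetch(url)
	if err != nil {
		return nil, err
	}
	sum, err := fetch(url + ".sha256")
	if err != nil {
		return nil, err
	}
	fields := strings.Fields(string(sum))
	if len(fields) == 0 {
		return nil, fmt.Errorf("empty checksum file for %s", url)
	}
	h := sha256.Sum256(body)
	if got := hex.EncodeToString(h[:]); got != fields[0] {
		return nil, fmt.Errorf("checksum mismatch: got %s, want %s", got, fields[0])
	}
	return body, nil
}

func main() {
	bin, err := downloadVerified("https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("verified %d bytes\n", len(bin))
}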
	I0314 18:22:09.727232    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:22:09.739949    4456 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:22:09.750877    4456 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0314 18:22:09.751537    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0314 18:22:17.844585    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:22:17.854214    4456 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:22:17.861030    4456 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0314 18:22:17.861222    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0314 18:22:22.321414    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:22:22.345359    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:22:22.355142    4456 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:22:22.361301    4456 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0314 18:22:22.361301    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0314 18:22:23.020492    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0314 18:22:23.037618    4456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0314 18:22:23.066254    4456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:22:23.095124    4456 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0314 18:22:23.134899    4456 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:22:23.143800    4456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:22:23.172678    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:22:23.355206    4456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:22:23.384806    4456 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:22:23.385381    4456 start.go:316] joinCluster: &{Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.92.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:22:23.385651    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0314 18:22:23.385727    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:22:25.384807    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:22:25.384807    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:22:25.384885    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:22:27.794823    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:22:27.794823    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:22:27.794823    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:22:27.992753    4456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6067657s)
	I0314 18:22:27.992926    4456 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.17.92.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:22:27.993011    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p7u2we.a8555h9i8xpsfr9n --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-832100-m02 --control-plane --apiserver-advertise-address=172.17.92.203 --apiserver-bind-port=8443"
	I0314 18:23:24.453643    4456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p7u2we.a8555h9i8xpsfr9n --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-832100-m02 --control-plane --apiserver-advertise-address=172.17.92.203 --apiserver-bind-port=8443": (56.4565214s)
	I0314 18:23:24.453643    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0314 18:23:25.205488    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-832100-m02 minikube.k8s.io/updated_at=2024_03_14T18_23_25_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=ha-832100 minikube.k8s.io/primary=false
	I0314 18:23:25.383511    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-832100-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0314 18:23:25.525981    4456 start.go:318] duration metric: took 1m2.1360754s to joinCluster
	I0314 18:23:25.526232    4456 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.92.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:23:25.529115    4456 out.go:177] * Verifying Kubernetes components...
	I0314 18:23:25.526431    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:23:25.539529    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:23:25.830984    4456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:23:25.857035    4456 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:23:25.858029    4456 kapi.go:59] client config for ha-832100: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-832100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-832100\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0314 18:23:25.858029    4456 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.95.254:8443 with https://172.17.90.10:8443
	I0314 18:23:25.859025    4456 node_ready.go:35] waiting up to 6m0s for node "ha-832100-m02" to be "Ready" ...
	I0314 18:23:25.859025    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:25.859025    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:25.859025    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:25.859025    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:25.876882    4456 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0314 18:23:26.373335    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:26.373399    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:26.373399    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:26.373399    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:26.380685    4456 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:23:26.866516    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:26.866516    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:26.866516    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:26.866516    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:26.872245    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:27.361464    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:27.361685    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:27.361685    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:27.361758    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:27.366521    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:27.868795    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:27.868854    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:27.868854    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:27.868854    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:27.873704    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:27.873704    4456 node_ready.go:53] node "ha-832100-m02" has status "Ready":"False"
	I0314 18:23:28.361234    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:28.361312    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:28.361312    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:28.361377    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:28.367031    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:28.869089    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:28.869328    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:28.869328    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:28.869328    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:28.873826    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:29.361659    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:29.361659    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:29.361659    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:29.361659    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:29.366934    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:29.869756    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:29.869756    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:29.869756    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:29.869756    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:29.874328    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:29.874735    4456 node_ready.go:53] node "ha-832100-m02" has status "Ready":"False"
	I0314 18:23:30.361543    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:30.361715    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:30.361715    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:30.361715    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:30.366478    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:30.869395    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:30.869451    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:30.869451    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:30.869451    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:31.228824    4456 round_trippers.go:574] Response Status: 200 OK in 359 milliseconds
	I0314 18:23:31.372366    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:31.372423    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:31.372423    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:31.372423    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:31.378114    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:31.863871    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:31.863965    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:31.863965    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:31.863965    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:31.869340    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:32.364196    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:32.364287    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:32.364287    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:32.364287    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:32.369322    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:32.370422    4456 node_ready.go:53] node "ha-832100-m02" has status "Ready":"False"
	I0314 18:23:32.868031    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:32.868124    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:32.868124    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:32.868124    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:32.872833    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:33.370723    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:33.370803    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:33.370803    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:33.370803    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:33.375511    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:33.873496    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:33.873496    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:33.873496    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:33.873496    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:33.877904    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:34.362897    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:34.362897    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:34.362897    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:34.362897    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:34.369071    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:23:34.867429    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:34.867516    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:34.867516    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:34.867516    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:34.872198    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:34.872748    4456 node_ready.go:53] node "ha-832100-m02" has status "Ready":"False"
	I0314 18:23:35.370004    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:35.370239    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.370239    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.370239    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.375131    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:35.874484    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:35.874685    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.874685    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.874685    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.879800    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:35.879984    4456 node_ready.go:49] node "ha-832100-m02" has status "Ready":"True"
	I0314 18:23:35.879984    4456 node_ready.go:38] duration metric: took 10.0202321s for node "ha-832100-m02" to be "Ready" ...
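
The ~500ms cadence above is minikube's node_ready wait: GET the node object repeatedly until its NodeReady condition reports "True". Below is a minimal client-go sketch of the same loop, assuming an already-configured Clientset; waitNodeReady and the readiness package are illustrative names, not minikube's actual code.

    package readiness

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node object roughly every 500ms (matching the
    // cadence in the log above) until its NodeReady condition is True.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("node %q has status Ready=True\n", name)
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }
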
	I0314 18:23:35.879984    4456 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:23:35.880513    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:23:35.880645    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.880645    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.880645    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.887558    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:23:35.896050    4456 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5rf5x" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.896050    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5rf5x
	I0314 18:23:35.896050    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.896050    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.896050    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.900878    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:35.902637    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:35.902637    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.902637    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.902637    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.906677    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:23:35.907695    4456 pod_ready.go:92] pod "coredns-5dd5756b68-5rf5x" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:35.907695    4456 pod_ready.go:81] duration metric: took 11.6442ms for pod "coredns-5dd5756b68-5rf5x" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.907759    4456 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mnw55" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.907837    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mnw55
	I0314 18:23:35.907894    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.907894    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.907921    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.910663    4456 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:23:35.912332    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:35.912405    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.912405    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.912405    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.917045    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:35.917988    4456 pod_ready.go:92] pod "coredns-5dd5756b68-mnw55" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:35.917988    4456 pod_ready.go:81] duration metric: took 10.2286ms for pod "coredns-5dd5756b68-mnw55" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.917988    4456 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.918096    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100
	I0314 18:23:35.918096    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.918096    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.918096    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.920672    4456 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:23:35.921666    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:35.921666    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.921666    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.921666    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.925447    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:23:35.926255    4456 pod_ready.go:92] pod "etcd-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:35.926255    4456 pod_ready.go:81] duration metric: took 8.2012ms for pod "etcd-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.926255    4456 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.926255    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m02
	I0314 18:23:35.926255    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.926255    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.926255    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.929822    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:23:35.930669    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:35.930669    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:35.930669    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:35.930669    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:35.935432    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:35.935819    4456 pod_ready.go:92] pod "etcd-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:35.935819    4456 pod_ready.go:81] duration metric: took 9.563ms for pod "etcd-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:35.935819    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:36.076268    4456 request.go:629] Waited for 140.4387ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100
	I0314 18:23:36.076599    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100
	I0314 18:23:36.076599    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:36.076700    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:36.076700    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:36.081639    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:36.278220    4456 request.go:629] Waited for 195.0811ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:36.278567    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:36.278644    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:36.278644    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:36.278644    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:36.283125    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:36.284840    4456 pod_ready.go:92] pod "kube-apiserver-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:36.284840    4456 pod_ready.go:81] duration metric: took 348.9961ms for pod "kube-apiserver-ha-832100" in "kube-system" namespace to be "Ready" ...
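
The "Waited for … due to client-side throttling" lines below and above come from client-go's default token-bucket rate limiter, as the message itself notes (it is not server-side priority and fairness). A minimal sketch of where that limiter is configured; the QPS and Burst values here are illustrative, not minikube's actual settings.

    package readiness

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient builds a Clientset whose requests are throttled client-side.
    // Once more than Burst requests are queued, later calls wait, producing
    // the ~150-200ms "Waited for ..." messages seen in this log.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 5    // steady-state requests per second
        cfg.Burst = 10 // short bursts allowed above QPS
        return kubernetes.NewForConfig(cfg)
    }
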
	I0314 18:23:36.284936    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:36.480791    4456 request.go:629] Waited for 195.7012ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m02
	I0314 18:23:36.481213    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m02
	I0314 18:23:36.481213    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:36.481213    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:36.481213    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:36.491458    4456 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 18:23:36.683270    4456 request.go:629] Waited for 191.2322ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:36.683561    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:36.683561    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:36.683561    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:36.683674    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:36.689322    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:36.689860    4456 pod_ready.go:92] pod "kube-apiserver-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:36.689860    4456 pod_ready.go:81] duration metric: took 404.849ms for pod "kube-apiserver-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:36.689860    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:36.885618    4456 request.go:629] Waited for 195.4447ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100
	I0314 18:23:36.885618    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100
	I0314 18:23:36.885618    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:36.885618    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:36.885618    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:36.891180    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:37.089688    4456 request.go:629] Waited for 197.441ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:37.089768    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:37.089843    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:37.089843    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:37.089843    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:37.094716    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:37.095727    4456 pod_ready.go:92] pod "kube-controller-manager-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:37.095836    4456 pod_ready.go:81] duration metric: took 405.9049ms for pod "kube-controller-manager-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:37.095869    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:37.279118    4456 request.go:629] Waited for 182.8903ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100-m02
	I0314 18:23:37.283777    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100-m02
	I0314 18:23:37.283777    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:37.283777    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:37.283889    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:37.289232    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:37.483807    4456 request.go:629] Waited for 192.7041ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:37.483912    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:37.483990    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:37.483990    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:37.483990    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:37.490584    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:23:37.491122    4456 pod_ready.go:92] pod "kube-controller-manager-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:37.491122    4456 pod_ready.go:81] duration metric: took 395.1705ms for pod "kube-controller-manager-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:37.491122    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cnzzc" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:37.686953    4456 request.go:629] Waited for 195.8167ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cnzzc
	I0314 18:23:37.686953    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cnzzc
	I0314 18:23:37.686953    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:37.686953    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:37.686953    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:37.693064    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:23:37.889338    4456 request.go:629] Waited for 195.204ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:37.889724    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:37.889724    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:37.889724    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:37.889724    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:37.895028    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:37.895851    4456 pod_ready.go:92] pod "kube-proxy-cnzzc" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:37.895851    4456 pod_ready.go:81] duration metric: took 404.6997ms for pod "kube-proxy-cnzzc" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:37.895851    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g4l9q" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:38.076544    4456 request.go:629] Waited for 180.551ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4l9q
	I0314 18:23:38.076985    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4l9q
	I0314 18:23:38.076985    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:38.076985    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:38.076985    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:38.082123    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:38.278303    4456 request.go:629] Waited for 194.7908ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:38.278622    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:38.278622    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:38.278622    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:38.278622    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:38.284491    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:38.285025    4456 pod_ready.go:92] pod "kube-proxy-g4l9q" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:38.285025    4456 pod_ready.go:81] duration metric: took 389.1456ms for pod "kube-proxy-g4l9q" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:38.285025    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:38.487247    4456 request.go:629] Waited for 202.0586ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100
	I0314 18:23:38.487569    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100
	I0314 18:23:38.487569    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:38.487569    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:38.487614    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:38.493024    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:38.675634    4456 request.go:629] Waited for 181.9308ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:38.675976    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:23:38.675976    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:38.675976    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:38.675976    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:38.680745    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:38.682067    4456 pod_ready.go:92] pod "kube-scheduler-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:38.682165    4456 pod_ready.go:81] duration metric: took 396.9284ms for pod "kube-scheduler-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:38.682165    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:38.878111    4456 request.go:629] Waited for 195.8461ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100-m02
	I0314 18:23:38.878111    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100-m02
	I0314 18:23:38.878111    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:38.878111    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:38.878111    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:38.883769    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:39.081164    4456 request.go:629] Waited for 195.9411ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:39.081474    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:23:39.081508    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:39.081508    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:39.081508    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:39.086262    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:23:39.086262    4456 pod_ready.go:92] pod "kube-scheduler-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:23:39.086262    4456 pod_ready.go:81] duration metric: took 404.0674ms for pod "kube-scheduler-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:23:39.086262    4456 pod_ready.go:38] duration metric: took 3.2060458s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
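
Each per-pod wait above follows the same two-step pattern: GET the pod and inspect its PodReady condition, then re-fetch the pod's node, since a pod on an unready node does not count. A condensed sketch of the condition check; podReady is a hypothetical helper, not minikube's:

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // podReady reports whether the pod's PodReady condition is True, mirroring
    // the pod_ready.go:92 checks in the log above.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
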
	I0314 18:23:39.086262    4456 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:23:39.096839    4456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:23:39.120142    4456 api_server.go:72] duration metric: took 13.5929244s to wait for apiserver process to appear ...
	I0314 18:23:39.120142    4456 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:23:39.120142    4456 api_server.go:253] Checking apiserver healthz at https://172.17.90.10:8443/healthz ...
	I0314 18:23:39.130124    4456 api_server.go:279] https://172.17.90.10:8443/healthz returned 200:
	ok
	I0314 18:23:39.130124    4456 round_trippers.go:463] GET https://172.17.90.10:8443/version
	I0314 18:23:39.130124    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:39.130124    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:39.130124    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:39.132711    4456 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:23:39.133620    4456 api_server.go:141] control plane version: v1.28.4
	I0314 18:23:39.133620    4456 api_server.go:131] duration metric: took 13.4767ms to wait for apiserver health ...
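
The healthz step is a plain HTTPS GET: the apiserver counts as healthy once /healthz returns 200 with the literal body "ok", after which /version is read for the control-plane version. A sketch, assuming an http.Client already configured with the cluster's CA and client certificates:

    package readiness

    import (
        "io"
        "net/http"
    )

    // apiserverHealthy probes <base>/healthz and mirrors the api_server.go:279
    // check above: healthy means HTTP 200 and a literal "ok" body.
    func apiserverHealthy(client *http.Client, base string) (bool, error) {
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }
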
	I0314 18:23:39.133699    4456 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:23:39.286917    4456 request.go:629] Waited for 152.9933ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:23:39.287187    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:23:39.287187    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:39.287187    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:39.287222    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:39.294794    4456 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:23:39.301151    4456 system_pods.go:59] 17 kube-system pods found
	I0314 18:23:39.301151    4456 system_pods.go:61] "coredns-5dd5756b68-5rf5x" [a1975ad0-d327-4b3a-81a0-ead7c000b839] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "coredns-5dd5756b68-mnw55" [1eb87fcd-6c11-4457-b9dc-aaa8ec89f851] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "etcd-ha-832100" [db669e0d-400b-4b97-a76f-53f15d844a6d] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "etcd-ha-832100-m02" [0127bd94-9828-4de0-9724-82b7de2a3730] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kindnet-6n7bk" [a1281a26-baf8-4566-b964-e4b042aceae9] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kindnet-jvbts" [1070cc03-2571-4d58-9446-b704ad17b1b1] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-apiserver-ha-832100" [30d411af-dab6-44d2-9887-a08a042d6150] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-apiserver-ha-832100-m02" [53db6070-884e-4df1-b77b-15a6415384db] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-controller-manager-ha-832100" [6d430700-f7cd-473e-98a7-c5d4f6c0b984] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-controller-manager-ha-832100-m02" [81fa8e3e-357e-4a7a-8acc-4481c0292f26] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-proxy-cnzzc" [83a6c448-c577-4c77-8e21-11efe6bab9ac] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-proxy-g4l9q" [5e8dd3b4-2059-47f9-aca1-cadb8dc76b4d] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-scheduler-ha-832100" [28207820-b6cd-4573-82b1-9fa8b88741b1] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-scheduler-ha-832100-m02" [d0d35814-e1ca-4136-9e0a-5a578f4d08e2] Running
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-vip-ha-832100" [c20342af-ece8-442d-88e0-b15cd453b554] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:23:39.301151    4456 system_pods.go:61] "kube-vip-ha-832100-m02" [f27cb2fa-b6eb-4c83-97c4-8582bb73aca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:23:39.301151    4456 system_pods.go:61] "storage-provisioner" [099c1e5d-1c0b-4df7-b023-1f8da354c4e6] Running
	I0314 18:23:39.301151    4456 system_pods.go:74] duration metric: took 167.4397ms to wait for pod list to return data ...
	I0314 18:23:39.301151    4456 default_sa.go:34] waiting for default service account to be created ...
	I0314 18:23:39.477127    4456 request.go:629] Waited for 175.8707ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:23:39.477301    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:23:39.477301    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:39.477301    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:39.477301    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:39.485304    4456 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:23:39.485304    4456 default_sa.go:45] found service account: "default"
	I0314 18:23:39.485304    4456 default_sa.go:55] duration metric: took 184.1393ms for default service account to be created ...
	I0314 18:23:39.485304    4456 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 18:23:39.679101    4456 request.go:629] Waited for 192.6625ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:23:39.679101    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:23:39.679101    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:39.679101    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:39.679101    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:39.693764    4456 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0314 18:23:39.699719    4456 system_pods.go:86] 17 kube-system pods found
	I0314 18:23:39.699719    4456 system_pods.go:89] "coredns-5dd5756b68-5rf5x" [a1975ad0-d327-4b3a-81a0-ead7c000b839] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "coredns-5dd5756b68-mnw55" [1eb87fcd-6c11-4457-b9dc-aaa8ec89f851] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "etcd-ha-832100" [db669e0d-400b-4b97-a76f-53f15d844a6d] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "etcd-ha-832100-m02" [0127bd94-9828-4de0-9724-82b7de2a3730] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "kindnet-6n7bk" [a1281a26-baf8-4566-b964-e4b042aceae9] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "kindnet-jvbts" [1070cc03-2571-4d58-9446-b704ad17b1b1] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "kube-apiserver-ha-832100" [30d411af-dab6-44d2-9887-a08a042d6150] Running
	I0314 18:23:39.699719    4456 system_pods.go:89] "kube-apiserver-ha-832100-m02" [53db6070-884e-4df1-b77b-15a6415384db] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-controller-manager-ha-832100" [6d430700-f7cd-473e-98a7-c5d4f6c0b984] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-controller-manager-ha-832100-m02" [81fa8e3e-357e-4a7a-8acc-4481c0292f26] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-proxy-cnzzc" [83a6c448-c577-4c77-8e21-11efe6bab9ac] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-proxy-g4l9q" [5e8dd3b4-2059-47f9-aca1-cadb8dc76b4d] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-scheduler-ha-832100" [28207820-b6cd-4573-82b1-9fa8b88741b1] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-scheduler-ha-832100-m02" [d0d35814-e1ca-4136-9e0a-5a578f4d08e2] Running
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-vip-ha-832100" [c20342af-ece8-442d-88e0-b15cd453b554] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:23:39.700375    4456 system_pods.go:89] "kube-vip-ha-832100-m02" [f27cb2fa-b6eb-4c83-97c4-8582bb73aca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:23:39.700469    4456 system_pods.go:89] "storage-provisioner" [099c1e5d-1c0b-4df7-b023-1f8da354c4e6] Running
	I0314 18:23:39.700469    4456 system_pods.go:126] duration metric: took 214.1821ms to wait for k8s-apps to be running ...
	I0314 18:23:39.700469    4456 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 18:23:39.709846    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:23:39.733637    4456 system_svc.go:56] duration metric: took 33.1354ms WaitForService to wait for kubelet
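
The kubelet probe runs over SSH inside the guest; systemctl's --quiet flag suppresses output, so "active" is signalled purely by a zero exit status. A local sketch of the same test (on the real run the command goes through minikube's ssh_runner rather than os/exec):

    package readiness

    import "os/exec"

    // kubeletActive runs the same probe as the log above; a zero exit status
    // from "systemctl is-active --quiet" means the unit is active.
    func kubeletActive() bool {
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
        return cmd.Run() == nil
    }
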
	I0314 18:23:39.733684    4456 kubeadm.go:576] duration metric: took 14.2064216s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:23:39.733750    4456 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:23:39.883651    4456 request.go:629] Waited for 149.5757ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes
	I0314 18:23:39.883829    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes
	I0314 18:23:39.883829    4456 round_trippers.go:469] Request Headers:
	I0314 18:23:39.883829    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:23:39.883829    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:23:39.889165    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:23:39.890157    4456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:23:39.890255    4456 node_conditions.go:123] node cpu capacity is 2
	I0314 18:23:39.890255    4456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:23:39.890255    4456 node_conditions.go:123] node cpu capacity is 2
	I0314 18:23:39.890255    4456 node_conditions.go:105] duration metric: took 156.4936ms to run NodePressure ...
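
The NodePressure pass lists every node and reads capacity straight from node.Status.Capacity, which is where the 17734596Ki ephemeral-storage and 2-CPU figures above come from. A sketch under the same client-go assumptions as the earlier blocks:

    package readiness

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printCapacities lists all nodes and reports ephemeral-storage and CPU
    // capacity, matching the node_conditions.go output above.
    func printCapacities(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }
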
	I0314 18:23:39.890359    4456 start.go:240] waiting for startup goroutines ...
	I0314 18:23:39.890450    4456 start.go:254] writing updated cluster config ...
	I0314 18:23:39.894037    4456 out.go:177] 
	I0314 18:23:39.907001    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:23:39.907662    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:23:39.912810    4456 out.go:177] * Starting "ha-832100-m03" control-plane node in "ha-832100" cluster
	I0314 18:23:39.915117    4456 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:23:39.915117    4456 cache.go:56] Caching tarball of preloaded images
	I0314 18:23:39.915784    4456 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 18:23:39.915784    4456 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 18:23:39.916315    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:23:39.921902    4456 start.go:360] acquireMachinesLock for ha-832100-m03: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:23:39.922007    4456 start.go:364] duration metric: took 53.1µs to acquireMachinesLock for "ha-832100-m03"
	I0314 18:23:39.922007    4456 start.go:93] Provisioning new machine with config: &{Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.92.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:23:39.922007    4456 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0314 18:23:39.925483    4456 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 18:23:39.925483    4456 start.go:159] libmachine.API.Create for "ha-832100" (driver="hyperv")
	I0314 18:23:39.925483    4456 client.go:168] LocalClient.Create starting
	I0314 18:23:39.926249    4456 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0314 18:23:39.926249    4456 main.go:141] libmachine: Decoding PEM data...
	I0314 18:23:39.926249    4456 main.go:141] libmachine: Parsing certificate...
	I0314 18:23:39.926249    4456 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0314 18:23:39.926249    4456 main.go:141] libmachine: Decoding PEM data...
	I0314 18:23:39.926249    4456 main.go:141] libmachine: Parsing certificate...
	I0314 18:23:39.926249    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0314 18:23:41.726144    4456 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0314 18:23:41.726144    4456 main.go:141] libmachine: [stderr =====>] : 
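
Every [executing ==>] step from here on is the Hyper-V driver shelling out to powershell.exe with -NoProfile -NonInteractive and capturing stdout and stderr separately, which is why each command is followed by a [stdout =====>] / [stderr =====>] pair. A sketch of such a helper; cmdOut is a hypothetical name, not libmachine's actual function:

    package hyperv

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // cmdOut runs one PowerShell command the way the log shows, returning the
    // captured stdout; stderr is collected separately for the error path.
    func cmdOut(command string) (string, error) {
        cmd := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", command,
        )
        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        if err := cmd.Run(); err != nil {
            return "", fmt.Errorf("%s: %w (stderr: %s)", command, err, stderr.String())
        }
        return stdout.String(), nil
    }
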
	I0314 18:23:41.726144    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0314 18:23:43.359650    4456 main.go:141] libmachine: [stdout =====>] : False
	
	I0314 18:23:43.359650    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:43.359650    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 18:23:44.752810    4456 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 18:23:44.753340    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:44.753340    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 18:23:48.198508    4456 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 18:23:48.198584    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:48.200246    4456 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:23:48.506298    4456 main.go:141] libmachine: Creating SSH key...
	I0314 18:23:48.732710    4456 main.go:141] libmachine: Creating VM...
	I0314 18:23:48.732710    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 18:23:51.388088    4456 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 18:23:51.388088    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:51.388618    4456 main.go:141] libmachine: Using switch "Default Switch"
	I0314 18:23:51.388618    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 18:23:53.047491    4456 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 18:23:53.047491    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:53.047672    4456 main.go:141] libmachine: Creating VHD
	I0314 18:23:53.047672    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0314 18:23:56.597818    4456 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : ED37C6A0-44DF-40B6-8B14-3CF0BECB7168
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0314 18:23:56.597904    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:56.597996    4456 main.go:141] libmachine: Writing magic tar header
	I0314 18:23:56.598072    4456 main.go:141] libmachine: Writing SSH key tar header
	I0314 18:23:56.606196    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0314 18:23:59.608221    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:23:59.613270    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:23:59.613369    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\disk.vhd' -SizeBytes 20000MB
	I0314 18:24:01.996273    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:01.996273    4456 main.go:141] libmachine: [stderr =====>] : 
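
The disk steps above form a small pipeline: create a 10MB fixed VHD, write a tar stream (the "magic tar header" and SSH key) into it, convert it to a dynamic VHD, then resize it to the requested 20000MB so the guest can grow the filesystem on first boot. An illustrative reconstruction using the hypothetical cmdOut helper from the earlier block; paths and helper names are assumptions:

    package hyperv

    import "fmt"

    // createDisk reproduces the New-VHD -> Convert-VHD -> Resize-VHD sequence
    // from the log above. The tar-header/SSH-key write happens between the
    // first and second steps, directly into fixed.vhd.
    func createDisk(dir string) error {
        fixed := dir + `\fixed.vhd`
        disk := dir + `\disk.vhd`
        steps := []string{
            fmt.Sprintf("Hyper-V\\New-VHD -Path '%s' -SizeBytes 10MB -Fixed", fixed),
            // (tar headers and the SSH key are written into fixed.vhd here)
            fmt.Sprintf("Hyper-V\\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource", fixed, disk),
            fmt.Sprintf("Hyper-V\\Resize-VHD -Path '%s' -SizeBytes 20000MB", disk),
        }
        for _, s := range steps {
            if _, err := cmdOut(s); err != nil {
                return err
            }
        }
        return nil
    }
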
	I0314 18:24:01.996273    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-832100-m03 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0314 18:24:05.415917    4456 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-832100-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0314 18:24:05.416259    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:05.416320    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-832100-m03 -DynamicMemoryEnabled $false
	I0314 18:24:07.489518    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:07.489518    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:07.489518    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-832100-m03 -Count 2
	I0314 18:24:09.527545    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:09.528089    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:09.528089    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-832100-m03 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\boot2docker.iso'
	I0314 18:24:11.930595    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:11.930648    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:11.930648    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-832100-m03 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\disk.vhd'
	I0314 18:24:14.389765    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:14.389765    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:14.389765    4456 main.go:141] libmachine: Starting VM...
	I0314 18:24:14.390501    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-832100-m03
	I0314 18:24:17.284393    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:17.284393    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:17.284393    4456 main.go:141] libmachine: Waiting for host to start...
	I0314 18:24:17.284644    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:19.362431    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:19.363045    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:19.363148    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:21.691601    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:21.692310    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:22.704524    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:24.704052    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:24.704207    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:24.704253    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:27.013832    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:27.013832    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:28.015167    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:30.032828    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:30.032828    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:30.032828    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:32.348790    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:32.349001    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:33.359120    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:35.394217    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:35.394217    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:35.394217    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:37.669171    4456 main.go:141] libmachine: [stdout =====>] : 
	I0314 18:24:37.669171    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:38.678659    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:40.705115    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:40.705906    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:40.705970    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:43.058138    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:24:43.058138    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:43.058138    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:44.994117    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:44.994117    4456 main.go:141] libmachine: [stderr =====>] : 
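
	The exchange above is the driver's start-up wait loop: after Hyper-V\Start-VM it alternates between polling ( Hyper-V\Get-VM ).state and the first NIC's first IP address until an address comes back (172.17.89.54 here). A minimal Go sketch of that pattern, assuming a hypothetical cmdOut helper in place of minikube's real PowerShell runner:

	    // A wait loop in the spirit of the log above: poll VM state, then the
	    // first NIC's first IP, until an address appears. cmdOut is a
	    // hypothetical stand-in for minikube's PowerShell runner.
	    package hypervwait

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	        "time"
	    )

	    func cmdOut(script string) (string, error) {
	        out, err := exec.Command("powershell.exe",
	            "-NoProfile", "-NonInteractive", script).Output()
	        return strings.TrimSpace(string(out)), err
	    }

	    func WaitForIP(vm string, timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            state, _ := cmdOut(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
	            if state == "Running" {
	                ip, _ := cmdOut(fmt.Sprintf(
	                    "(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
	                if ip != "" {
	                    return ip, nil // e.g. 172.17.89.54 in the run above
	                }
	            }
	            time.Sleep(time.Second) // retry pacing; the real driver's delay differs
	        }
	        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
	    }
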
	I0314 18:24:44.994117    4456 machine.go:94] provisionDockerMachine start ...
	I0314 18:24:44.994245    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:46.985416    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:46.985416    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:46.986069    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:49.344205    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:24:49.344443    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:49.348000    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:24:49.357524    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:24:49.357524    4456 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:24:49.485978    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 18:24:49.485978    4456 buildroot.go:166] provisioning hostname "ha-832100-m03"
	I0314 18:24:49.485978    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:51.435939    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:51.436654    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:51.436752    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:53.773459    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:24:53.773459    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:53.778134    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:24:53.778134    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:24:53.778134    4456 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-832100-m03 && echo "ha-832100-m03" | sudo tee /etc/hostname
	I0314 18:24:53.925684    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-832100-m03
	
	I0314 18:24:53.925684    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:24:55.870992    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:24:55.870992    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:55.870992    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:24:58.253424    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:24:58.253561    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:24:58.257148    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:24:58.257148    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:24:58.257148    4456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832100-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832100-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832100-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:24:58.388339    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:24:58.388339    4456 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 18:24:58.388339    4456 buildroot.go:174] setting up certificates
	I0314 18:24:58.388339    4456 provision.go:84] configureAuth start
	I0314 18:24:58.388339    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:00.342689    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:00.343514    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:00.343514    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:02.698584    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:02.699241    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:02.699241    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:04.661912    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:04.662311    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:04.662311    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:07.021304    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:07.021304    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:07.021304    4456 provision.go:143] copyHostCerts
	I0314 18:25:07.021477    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 18:25:07.021477    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 18:25:07.021477    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 18:25:07.021969    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 18:25:07.022912    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 18:25:07.023160    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 18:25:07.023160    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 18:25:07.023516    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 18:25:07.024393    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 18:25:07.024739    4456 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 18:25:07.024739    4456 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 18:25:07.025149    4456 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 18:25:07.025575    4456 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-832100-m03 san=[127.0.0.1 172.17.89.54 ha-832100-m03 localhost minikube]
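
	The san=[...] list on the line above becomes the subject-alternative-name set of the node's Docker server certificate, signed by the shared CA (ca.pem / ca-key.pem). A hedged sketch of that signing step with Go's crypto/x509; the function name and plumbing are illustrative, not minikube's actual code:

	    // Sign a server cert with the cluster CA, embedding the logged SANs.
	    package certsketch

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "math/big"
	        "net"
	        "time"
	    )

	    func SignServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            return nil, nil, err
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-832100-m03"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().AddDate(10, 0, 0),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            // SANs straight from the log line above: 127.0.0.1,
	            // 172.17.89.54, ha-832100-m03, localhost, minikube
	            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.89.54")},
	            DNSNames:    []string{"ha-832100-m03", "localhost", "minikube"},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	        return der, key, err
	    }
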
	I0314 18:25:07.222638    4456 provision.go:177] copyRemoteCerts
	I0314 18:25:07.231650    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:25:07.231650    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:09.205126    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:09.205922    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:09.205922    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:11.551218    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:11.551218    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:11.551601    4456 sshutil.go:53] new ssh client: &{IP:172.17.89.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\id_rsa Username:docker}
	I0314 18:25:11.656175    4456 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4242079s)
	I0314 18:25:11.656240    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 18:25:11.656362    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:25:11.703292    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 18:25:11.703458    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 18:25:11.751861    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 18:25:11.751861    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 18:25:11.795910    4456 provision.go:87] duration metric: took 13.4066102s to configureAuth
	I0314 18:25:11.796907    4456 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:25:11.796907    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:25:11.796907    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:13.753295    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:13.753334    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:13.753407    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:16.143073    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:16.143940    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:16.147776    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:25:16.148167    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:25:16.148167    4456 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 18:25:16.278483    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 18:25:16.278571    4456 buildroot.go:70] root file system type: tmpfs
	I0314 18:25:16.278798    4456 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 18:25:16.278868    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:18.244263    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:18.244263    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:18.244374    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:20.588253    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:20.588253    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:20.595073    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:25:20.595734    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:25:20.595734    4456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.90.10"
	Environment="NO_PROXY=172.17.90.10,172.17.92.203"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 18:25:20.740991    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.90.10
	Environment=NO_PROXY=172.17.90.10,172.17.92.203
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 18:25:20.741580    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:22.717574    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:22.717574    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:22.717574    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:25.067588    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:25.067588    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:25.071183    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:25:25.071242    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:25:25.071242    4456 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 18:25:27.177254    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 18:25:27.177254    4456 machine.go:97] duration metric: took 42.1801112s to provisionDockerMachine
	I0314 18:25:27.177254    4456 client.go:171] duration metric: took 1m47.244054s to LocalClient.Create
	I0314 18:25:27.177794    4456 start.go:167] duration metric: took 1m47.244054s to libmachine.API.Create "ha-832100"
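
	The docker.service text shipped above is rendered host-side and written to docker.service.new; the diff || { mv ...; systemctl ... } command only swaps it in and restarts Docker when the contents changed (here diff failed because no unit existed yet, so it was installed fresh). A sketch of rendering such a unit with text/template; the field names are assumptions, not minikube's actual provisioner types:

	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    // Only the ExecStart-clearing part is shown: the first empty
	    // ExecStart= wipes any inherited command, since systemd rejects
	    // multiple ExecStart= lines unless Type=oneshot.
	    const unit = "[Service]\n" +
	        "ExecStart=\n" +
	        "ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 " +
	        "--tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}}\n"

	    func main() {
	        t := template.Must(template.New("docker").Parse(unit))
	        _ = t.Execute(os.Stdout, map[string]string{
	            "CACert":     "/etc/docker/ca.pem",
	            "ServerCert": "/etc/docker/server.pem",
	            "ServerKey":  "/etc/docker/server-key.pem",
	        })
	    }
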
	I0314 18:25:27.177843    4456 start.go:293] postStartSetup for "ha-832100-m03" (driver="hyperv")
	I0314 18:25:27.177866    4456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:25:27.186184    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:25:27.186184    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:29.200667    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:29.201124    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:29.201203    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:31.559491    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:31.559491    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:31.560327    4456 sshutil.go:53] new ssh client: &{IP:172.17.89.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\id_rsa Username:docker}
	I0314 18:25:31.653939    4456 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4674328s)
	I0314 18:25:31.663804    4456 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:25:31.670686    4456 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:25:31.670771    4456 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 18:25:31.671063    4456 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 18:25:31.671320    4456 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 18:25:31.671320    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 18:25:31.680617    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:25:31.698729    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 18:25:31.742770    4456 start.go:296] duration metric: took 4.5645982s for postStartSetup
	I0314 18:25:31.744965    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:33.720266    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:33.721015    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:33.721015    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:36.063260    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:36.063260    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:36.064366    4456 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\config.json ...
	I0314 18:25:36.066192    4456 start.go:128] duration metric: took 1m56.1358274s to createHost
	I0314 18:25:36.066192    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:38.012377    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:38.012377    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:38.012517    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:40.377672    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:40.377925    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:40.382059    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:25:40.382425    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:25:40.382498    4456 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 18:25:40.512392    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440740.773172686
	
	I0314 18:25:40.512392    4456 fix.go:216] guest clock: 1710440740.773172686
	I0314 18:25:40.512488    4456 fix.go:229] Guest: 2024-03-14 18:25:40.773172686 +0000 UTC Remote: 2024-03-14 18:25:36.0661926 +0000 UTC m=+556.593856501 (delta=4.706980086s)
	I0314 18:25:40.512488    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:42.467679    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:42.467679    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:42.468370    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:44.797230    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:44.797230    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:44.802081    4456 main.go:141] libmachine: Using SSH client type: native
	I0314 18:25:44.802684    4456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.89.54 22 <nil> <nil>}
	I0314 18:25:44.802684    4456 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710440740
	I0314 18:25:44.940695    4456 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 18:25:40 UTC 2024
	
	I0314 18:25:44.941226    4456 fix.go:236] clock set: Thu Mar 14 18:25:40 UTC 2024
	 (err=<nil>)
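
	The fix.go lines above compare the guest's date +%s.%N against the host clock and, because the ~4.7s delta exceeded tolerance, reset the guest with sudo date -s @1710440740. A sketch of that check, assuming a hypothetical runSSH runner:

	    package clocksync

	    import (
	        "fmt"
	        "math"
	        "strconv"
	        "strings"
	        "time"
	    )

	    func Sync(runSSH func(cmd string) (string, error), tolerance time.Duration) error {
	        out, err := runSSH("date +%s.%N")
	        if err != nil {
	            return err
	        }
	        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	        if err != nil {
	            return err
	        }
	        drift := time.Since(time.Unix(0, int64(secs*float64(time.Second))))
	        if math.Abs(drift.Seconds()) < tolerance.Seconds() {
	            return nil // within tolerance, leave the guest clock alone
	        }
	        // e.g. "sudo date -s @1710440740" in the run above
	        _, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	        return err
	    }
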
	I0314 18:25:44.941226    4456 start.go:83] releasing machines lock for "ha-832100-m03", held for 2m5.0102203s
	I0314 18:25:44.941400    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:46.886916    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:46.886916    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:46.887149    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:49.220645    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:49.220645    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:49.225136    4456 out.go:177] * Found network options:
	I0314 18:25:49.227214    4456 out.go:177]   - NO_PROXY=172.17.90.10,172.17.92.203
	W0314 18:25:49.229799    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:25:49.229799    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:25:49.231853    4456 out.go:177]   - NO_PROXY=172.17.90.10,172.17.92.203
	W0314 18:25:49.233235    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:25:49.233235    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:25:49.234233    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:25:49.234233    4456 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:25:49.237013    4456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:25:49.238091    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:49.243305    4456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 18:25:49.244307    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:25:51.236990    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:51.236990    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:51.236990    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:51.249881    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:25:51.249881    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:51.249881    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:25:53.627475    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:53.627475    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:53.627629    4456 sshutil.go:53] new ssh client: &{IP:172.17.89.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\id_rsa Username:docker}
	I0314 18:25:53.650585    4456 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:25:53.650585    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:25:53.650585    4456 sshutil.go:53] new ssh client: &{IP:172.17.89.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\id_rsa Username:docker}
	I0314 18:25:53.774688    4456 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5310553s)
	W0314 18:25:53.774688    4456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:25:53.774812    4456 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5374699s)
	I0314 18:25:53.783576    4456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:25:53.810333    4456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:25:53.810333    4456 start.go:494] detecting cgroup driver to use...
	I0314 18:25:53.810333    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:25:53.850261    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 18:25:53.884126    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 18:25:53.904936    4456 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 18:25:53.917935    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 18:25:53.948932    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:25:53.983402    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 18:25:54.015433    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:25:54.044429    4456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:25:54.072457    4456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 18:25:54.101345    4456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:25:54.128213    4456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:25:54.155984    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:25:54.347837    4456 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 18:25:54.379685    4456 start.go:494] detecting cgroup driver to use...
	I0314 18:25:54.390679    4456 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 18:25:54.422190    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:25:54.452805    4456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:25:54.491981    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:25:54.522884    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:25:54.553646    4456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 18:25:54.636093    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:25:54.659013    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:25:54.702208    4456 ssh_runner.go:195] Run: which cri-dockerd
	I0314 18:25:54.718333    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 18:25:54.743201    4456 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 18:25:54.784354    4456 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 18:25:54.973991    4456 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 18:25:55.152370    4456 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 18:25:55.152370    4456 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
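
	The 130-byte daemon.json pushed above pins Docker to the same cgroupfs driver that the earlier sed edits configured for containerd (SystemdCgroup = false), keeping all runtimes on one cgroup driver. A sketch of such a write; the JSON body is an assumption inferred from the "cgroupfs" log line, not the literal 130 bytes that were shipped:

	    package main

	    import "os"

	    func main() {
	        // Hypothetical daemon.json content selecting the cgroupfs driver.
	        cfg := []byte("{\"exec-opts\": [\"native.cgroupdriver=cgroupfs\"]}\n")
	        if err := os.WriteFile("/etc/docker/daemon.json", cfg, 0o644); err != nil {
	            panic(err)
	        }
	    }
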
	I0314 18:25:55.190439    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:25:55.366737    4456 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 18:25:57.866034    4456 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4990428s)
	I0314 18:25:57.874926    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 18:25:57.910228    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:25:57.943810    4456 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 18:25:58.135489    4456 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 18:25:58.328470    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:25:58.511718    4456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 18:25:58.548132    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:25:58.580352    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:25:58.769254    4456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 18:25:58.865204    4456 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 18:25:58.875707    4456 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 18:25:58.886002    4456 start.go:562] Will wait 60s for crictl version
	I0314 18:25:58.895571    4456 ssh_runner.go:195] Run: which crictl
	I0314 18:25:58.911288    4456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:25:58.984261    4456 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 18:25:58.993731    4456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:25:59.034062    4456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:25:59.069392    4456 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 18:25:59.073124    4456 out.go:177]   - env NO_PROXY=172.17.90.10
	I0314 18:25:59.075295    4456 out.go:177]   - env NO_PROXY=172.17.90.10,172.17.92.203
	I0314 18:25:59.077622    4456 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 18:25:59.082799    4456 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 18:25:59.082871    4456 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 18:25:59.082871    4456 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 18:25:59.082871    4456 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 18:25:59.085843    4456 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 18:25:59.085898    4456 ip.go:210] interface addr: 172.17.80.1/20
	I0314 18:25:59.097874    4456 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 18:25:59.103737    4456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:25:59.123902    4456 mustload.go:65] Loading cluster: ha-832100
	I0314 18:25:59.124536    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:25:59.124722    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:26:01.063855    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:26:01.063855    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:26:01.063855    4456 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:26:01.065177    4456 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100 for IP: 172.17.89.54
	I0314 18:26:01.065177    4456 certs.go:194] generating shared ca certs ...
	I0314 18:26:01.065259    4456 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:26:01.065837    4456 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 18:26:01.066174    4456 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 18:26:01.066356    4456 certs.go:256] generating profile certs ...
	I0314 18:26:01.066718    4456 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\client.key
	I0314 18:26:01.066718    4456 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.4377854c
	I0314 18:26:01.067051    4456 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.4377854c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.90.10 172.17.92.203 172.17.89.54 172.17.95.254]
	I0314 18:26:01.241196    4456 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.4377854c ...
	I0314 18:26:01.241196    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.4377854c: {Name:mka1507243b4541904331c4d3a2bb32413478303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:26:01.242196    4456 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.4377854c ...
	I0314 18:26:01.242196    4456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.4377854c: {Name:mk0f1cb39b26dc4d2052fa37e53b0b761513c8aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:26:01.243328    4456 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt.4377854c -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt
	I0314 18:26:01.256384    4456 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key.4377854c -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key
	I0314 18:26:01.257588    4456 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key
	I0314 18:26:01.257588    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:26:01.258391    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:26:01.258391    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:26:01.258722    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:26:01.258802    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:26:01.258926    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:26:01.266248    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:26:01.266852    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:26:01.267303    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 18:26:01.267543    4456 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 18:26:01.267616    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 18:26:01.267774    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 18:26:01.267774    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 18:26:01.267774    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 18:26:01.268376    4456 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 18:26:01.268580    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 18:26:01.268728    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:26:01.268802    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 18:26:01.268955    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:26:03.233614    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:26:03.233614    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:26:03.234386    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:26:05.595287    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:26:05.595329    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:26:05.595810    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:26:05.687697    4456 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0314 18:26:05.695391    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0314 18:26:05.722310    4456 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0314 18:26:05.729552    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0314 18:26:05.756798    4456 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0314 18:26:05.764650    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0314 18:26:05.792790    4456 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0314 18:26:05.799163    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0314 18:26:05.825720    4456 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0314 18:26:05.832424    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0314 18:26:05.860634    4456 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0314 18:26:05.866986    4456 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0314 18:26:05.884498    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:26:05.930334    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 18:26:05.971883    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:26:06.014824    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 18:26:06.057952    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0314 18:26:06.104959    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 18:26:06.148355    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:26:06.189938    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-832100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 18:26:06.232419    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 18:26:06.275625    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:26:06.320216    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 18:26:06.362681    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0314 18:26:06.393445    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0314 18:26:06.421928    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0314 18:26:06.452991    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0314 18:26:06.481780    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0314 18:26:06.515946    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0314 18:26:06.545965    4456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0314 18:26:06.585050    4456 ssh_runner.go:195] Run: openssl version
	I0314 18:26:06.602743    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 18:26:06.630114    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 18:26:06.637902    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 18:26:06.646914    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 18:26:06.663889    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
	I0314 18:26:06.690968    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 18:26:06.718253    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 18:26:06.724795    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 18:26:06.733288    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 18:26:06.750271    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:26:06.778689    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:26:06.807161    4456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:26:06.813667    4456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:26:06.821890    4456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:26:06.839789    4456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
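
	The hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: openssl x509 -hash -noout prints the hash under which /etc/ssl/certs/<hash>.0 must point at the certificate. A small sketch that derives the link path the same way the commands above do:

	    package certlinks

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // HashLink shells out to openssl to compute the subject hash and
	    // returns the /etc/ssl/certs/<hash>.0 path a symlink should use.
	    func HashLink(pemPath string) (string, error) {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	        if err != nil {
	            return "", err
	        }
	        return fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out))), nil
	    }

	For example, HashLink("/usr/share/ca-certificates/minikubeCA.pem") would return /etc/ssl/certs/b5213941.0, matching the ln -fs above.
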
	I0314 18:26:06.868313    4456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:26:06.874502    4456 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:26:06.874806    4456 kubeadm.go:928] updating node {m03 172.17.89.54 8443 v1.28.4 docker true true} ...
	I0314 18:26:06.874957    4456 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-832100-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.89.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:26:06.875037    4456 kube-vip.go:105] generating kube-vip config ...
	I0314 18:26:06.875037    4456 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
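The block above is the complete kube-vip static-pod manifest that minikube generates in memory; a few lines further down the log shows it being copied to /etc/kubernetes/manifests/kube-vip.yaml, where the kubelet picks it up. As a rough illustration only (this is not minikube's actual generator; the trimmed field set below is an assumption, with the VIP and port values taken from the log), the same kind of manifest could be rendered from a Go template:

// Illustrative sketch only (not minikube's generator): rendering a
// kube-vip static-pod manifest like the one logged above from a Go
// template. The trimmed field set is an assumption; the VIP/port
// values come from the log.
package main

import (
	"os"
	"text/template"
)

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// kubelet watches /etc/kubernetes/manifests; writing the rendered
	// manifest there (as the later scp step does) starts the pod.
	_ = t.Execute(os.Stdout, struct{ VIP, Port string }{"172.17.95.254", "8443"})
}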
	I0314 18:26:06.883600    4456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:26:06.900159    4456 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0314 18:26:06.904682    4456 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0314 18:26:06.925551    4456 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0314 18:26:06.925551    4456 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0314 18:26:06.925551    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:26:06.925551    4456 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0314 18:26:06.926288    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
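The checksum=file:... URLs above pair each binary with its published .sha256 file. A hedged sketch of that verification pattern (not minikube's downloader; it assumes the dl.k8s.io .sha256 files hold a bare hex digest, which is the case for Kubernetes release artifacts):

// Hedged sketch of the checksum=file:... pattern above: fetch the binary,
// fetch its published .sha256, and compare digests. Not minikube's
// actual download code.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // digest is the first token
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch for " + base)
	}
	fmt.Println("verified:", want)
}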
	I0314 18:26:06.937792    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:26:06.938504    4456 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:26:06.939797    4456 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:26:06.958012    4456 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0314 18:26:06.958092    4456 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0314 18:26:06.958092    4456 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:26:06.958239    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0314 18:26:06.958239    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0314 18:26:06.967370    4456 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:27:07.037045    4456 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0314 18:26:07.037265    4456 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0314 18:26:08.136690    4456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0314 18:26:08.154748    4456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0314 18:26:08.188370    4456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:26:08.220010    4456 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0314 18:26:08.263105    4456 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:26:08.269390    4456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
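The one-liner above rewrites /etc/hosts in place: it filters out any stale control-plane.minikube.internal entry, appends the VIP, and copies the temp file back with sudo. A minimal Go sketch of the same filter-and-append step (illustrative only; updateHosts is a made-up helper, not a minikube function):

// Minimal sketch of the filter-and-append step the shell pipeline above
// performs; updateHosts is hypothetical, for illustration. It drops any
// line already ending in "\tcontrol-plane.minikube.internal" and appends
// the current VIP.
package main

import (
	"fmt"
	"os"
	"strings"
)

func updateHosts(hosts, vip string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // stale entry; replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, vip+"\tcontrol-plane.minikube.internal")
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(updateHosts(string(data), "172.17.95.254"))
}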
	I0314 18:26:08.298928    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:26:08.477984    4456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:26:08.504870    4456 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:26:08.505492    4456 start.go:316] joinCluster: &{Name:ha-832100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-832100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.92.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.17.89.54 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:26:08.505492    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0314 18:26:08.505492    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:26:10.468760    4456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:26:10.468999    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:26:10.468999    4456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:26:12.900808    4456 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:26:12.901298    4456 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:26:12.901298    4456 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:26:13.501772    4456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9959166s)
	I0314 18:26:13.501937    4456 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.17.89.54 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:26:13.502040    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8p1ag1.y3mj4i16tjb2rzcp --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-832100-m03 --control-plane --apiserver-advertise-address=172.17.89.54 --apiserver-bind-port=8443"
	I0314 18:26:58.220623    4456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8p1ag1.y3mj4i16tjb2rzcp --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-832100-m03 --control-plane --apiserver-advertise-address=172.17.89.54 --apiserver-bind-port=8443": (44.7152369s)
	I0314 18:26:58.220623    4456 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0314 18:26:58.979403    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-832100-m03 minikube.k8s.io/updated_at=2024_03_14T18_26_58_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=ha-832100 minikube.k8s.io/primary=false
	I0314 18:26:59.127322    4456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-832100-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0314 18:26:59.387400    4456 start.go:318] duration metric: took 50.8781682s to joinCluster
	I0314 18:26:59.387400    4456 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.17.89.54 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:26:59.388421    4456 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:26:59.390033    4456 out.go:177] * Verifying Kubernetes components...
	I0314 18:26:59.402988    4456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:26:59.780995    4456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:26:59.815706    4456 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:26:59.816473    4456 kapi.go:59] client config for ha-832100: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-832100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-832100\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0314 18:26:59.816618    4456 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.95.254:8443 with https://172.17.90.10:8443
	I0314 18:26:59.816765    4456 node_ready.go:35] waiting up to 6m0s for node "ha-832100-m03" to be "Ready" ...
	I0314 18:26:59.817298    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:26:59.817298    4456 round_trippers.go:469] Request Headers:
	I0314 18:26:59.817298    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:26:59.817298    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:26:59.832970    4456 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0314 18:27:00.321985    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:00.321985    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:00.321985    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:00.321985    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:00.326553    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:00.827051    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:00.827051    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:00.827051    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:00.827051    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:00.832348    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:27:01.331471    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:01.331551    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:01.331551    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:01.331551    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:01.337065    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:01.820519    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:01.820519    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:01.820519    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:01.820519    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:01.826133    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:27:01.827851    4456 node_ready.go:53] node "ha-832100-m03" has status "Ready":"False"
	I0314 18:27:02.325379    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:02.325602    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:02.325602    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:02.325602    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:02.330183    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:02.818901    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:02.818901    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:02.819129    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:02.819129    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:02.823781    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:03.327448    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:03.327523    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:03.327523    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:03.327523    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:03.331741    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:03.819758    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:03.819758    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:03.819758    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:03.819758    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:03.825521    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:27:04.327912    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:04.327912    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:04.327912    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:04.327912    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:04.332736    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:04.333348    4456 node_ready.go:53] node "ha-832100-m03" has status "Ready":"False"
	I0314 18:27:04.831778    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:04.847328    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:04.847391    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:04.847391    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:04.945479    4456 round_trippers.go:574] Response Status: 200 OK in 98 milliseconds
	I0314 18:27:05.320788    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:05.320788    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:05.320788    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:05.320788    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:05.324502    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:05.822920    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:05.822920    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:05.822920    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:05.822920    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:05.857661    4456 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0314 18:27:06.326230    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:06.326512    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:06.326512    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:06.326512    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:06.332824    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:27:06.832639    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:06.832679    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:06.832719    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:06.832719    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:06.837673    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:06.838326    4456 node_ready.go:53] node "ha-832100-m03" has status "Ready":"False"
	I0314 18:27:07.322310    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:07.322392    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:07.322392    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:07.322392    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:07.326670    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:07.817531    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:07.817531    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:07.817531    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:07.817531    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:07.842083    4456 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0314 18:27:08.331820    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:08.331870    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:08.331919    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:08.331919    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:08.336589    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:08.819433    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:08.819622    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:08.819622    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:08.819622    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:08.824219    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:09.325914    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:09.325914    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.325914    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.325914    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.330480    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:09.331713    4456 node_ready.go:49] node "ha-832100-m03" has status "Ready":"True"
	I0314 18:27:09.331805    4456 node_ready.go:38] duration metric: took 9.5143332s for node "ha-832100-m03" to be "Ready" ...
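The preceding run of GET /api/v1/nodes/ha-832100-m03 requests is a plain poll loop: fetch the node roughly every 500ms until its Ready condition reports True, bounded by the 6m0s wait. A sketch of that loop written against client-go (an assumption about tooling; waitNodeReady is hypothetical, not minikube's node_ready implementation — the node name and timeout mirror the log):

// Sketch of the ~500ms node-readiness poll visible in the round_trippers
// lines above, using client-go. waitNodeReady is hypothetical; the node
// name and 6m timeout come from the log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "ha-832100-m03", 6*time.Minute); err != nil {
		panic(err)
	}
}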
	I0314 18:27:09.331805    4456 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:27:09.331913    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:27:09.331913    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.332026    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.332026    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.340280    4456 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:27:09.350488    4456 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5rf5x" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.350488    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5rf5x
	I0314 18:27:09.350488    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.350488    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.350488    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.354738    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:09.356381    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:09.356498    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.356498    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.356498    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.359663    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:09.361116    4456 pod_ready.go:92] pod "coredns-5dd5756b68-5rf5x" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:09.361263    4456 pod_ready.go:81] duration metric: took 10.7736ms for pod "coredns-5dd5756b68-5rf5x" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.361263    4456 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mnw55" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.361351    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mnw55
	I0314 18:27:09.361390    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.361404    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.361404    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.365601    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:09.366466    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:09.366501    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.366501    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.366501    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.369964    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:09.370214    4456 pod_ready.go:92] pod "coredns-5dd5756b68-mnw55" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:09.370214    4456 pod_ready.go:81] duration metric: took 8.9505ms for pod "coredns-5dd5756b68-mnw55" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.370214    4456 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.370214    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100
	I0314 18:27:09.370214    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.370214    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.370214    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.374521    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:09.374521    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:09.374521    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.374521    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.374521    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.378452    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:09.379684    4456 pod_ready.go:92] pod "etcd-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:09.379684    4456 pod_ready.go:81] duration metric: took 9.4697ms for pod "etcd-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.379684    4456 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.379684    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m02
	I0314 18:27:09.379684    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.379684    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.379684    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.383337    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:09.384283    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:09.384357    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.384357    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.384357    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.387526    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:09.388101    4456 pod_ready.go:92] pod "etcd-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:09.388647    4456 pod_ready.go:81] duration metric: took 8.9619ms for pod "etcd-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.388647    4456 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:09.527084    4456 request.go:629] Waited for 138.1577ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m03
	I0314 18:27:09.527278    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m03
	I0314 18:27:09.527278    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.527278    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.527278    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.547113    4456 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0314 18:27:09.728461    4456 request.go:629] Waited for 180.7081ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:09.728658    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:09.728658    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.728658    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.728658    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.739027    4456 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 18:27:09.933901    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m03
	I0314 18:27:09.933973    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:09.933973    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:09.933973    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:09.939264    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:27:10.134402    4456 request.go:629] Waited for 194.1685ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:10.134759    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:10.134759    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:10.134759    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:10.134759    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:10.142549    4456 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:27:10.399734    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m03
	I0314 18:27:10.399937    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:10.399937    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:10.399937    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:10.406928    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:27:10.538754    4456 request.go:629] Waited for 131.1569ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:10.539105    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:10.539105    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:10.539105    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:10.539105    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:10.543821    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:10.897196    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-832100-m03
	I0314 18:27:10.897196    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:10.897196    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:10.897196    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:10.901810    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:10.928607    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:10.929017    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:10.929017    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:10.929017    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:10.933083    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:10.934406    4456 pod_ready.go:92] pod "etcd-ha-832100-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:10.934458    4456 pod_ready.go:81] duration metric: took 1.545634s for pod "etcd-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:10.934511    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:11.131571    4456 request.go:629] Waited for 196.965ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100
	I0314 18:27:11.131571    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100
	I0314 18:27:11.131571    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:11.131571    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:11.131571    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:11.142270    4456 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 18:27:11.332312    4456 request.go:629] Waited for 188.6003ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:11.332670    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:11.332670    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:11.332670    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:11.332670    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:11.339042    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:27:11.339693    4456 pod_ready.go:92] pod "kube-apiserver-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:11.339787    4456 pod_ready.go:81] duration metric: took 405.2467ms for pod "kube-apiserver-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:11.339787    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:11.536897    4456 request.go:629] Waited for 197.01ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m02
	I0314 18:27:11.537306    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m02
	I0314 18:27:11.537306    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:11.537306    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:11.537306    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:11.544162    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:27:11.740278    4456 request.go:629] Waited for 194.8178ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:11.740353    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:11.740353    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:11.740353    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:11.740353    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:11.745145    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:11.745756    4456 pod_ready.go:92] pod "kube-apiserver-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:11.745813    4456 pod_ready.go:81] duration metric: took 405.9958ms for pod "kube-apiserver-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:11.745813    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:11.926396    4456 request.go:629] Waited for 180.336ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m03
	I0314 18:27:11.926482    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m03
	I0314 18:27:11.926597    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:11.926597    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:11.926597    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:11.930829    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:12.130105    4456 request.go:629] Waited for 197.6038ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:12.130433    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:12.130521    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:12.130521    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:12.130574    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:12.135722    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:27:12.334766    4456 request.go:629] Waited for 79.4589ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m03
	I0314 18:27:12.334929    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m03
	I0314 18:27:12.334929    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:12.334929    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:12.334929    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:12.340002    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:12.536609    4456 request.go:629] Waited for 195.7417ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:12.536609    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:12.536609    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:12.536609    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:12.536609    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:12.540796    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:12.756441    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m03
	I0314 18:27:12.756441    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:12.756504    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:12.756504    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:12.760908    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:12.926934    4456 request.go:629] Waited for 164.2646ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:12.927023    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:12.927023    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:12.927023    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:12.927023    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:12.933829    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:27:13.255615    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-832100-m03
	I0314 18:27:13.255615    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:13.255615    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:13.255615    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:13.260197    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:13.326404    4456 request.go:629] Waited for 64.9859ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:13.326508    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:13.326508    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:13.326508    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:13.326508    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:13.331086    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:13.331682    4456 pod_ready.go:92] pod "kube-apiserver-ha-832100-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:13.331682    4456 pod_ready.go:81] duration metric: took 1.585696s for pod "kube-apiserver-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:13.331682    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:13.539727    4456 request.go:629] Waited for 207.2412ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100
	I0314 18:27:13.539807    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100
	I0314 18:27:13.539807    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:13.539889    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:13.539889    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:13.543998    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:13.730581    4456 request.go:629] Waited for 185.1362ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:13.730765    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:13.730899    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:13.730899    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:13.730899    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:13.739027    4456 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:27:13.739632    4456 pod_ready.go:92] pod "kube-controller-manager-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:13.739632    4456 pod_ready.go:81] duration metric: took 407.9198ms for pod "kube-controller-manager-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:13.739632    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:13.933100    4456 request.go:629] Waited for 193.1902ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100-m02
	I0314 18:27:13.933325    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100-m02
	I0314 18:27:13.933401    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:13.933401    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:13.933401    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:13.937453    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:14.135834    4456 request.go:629] Waited for 196.8582ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:14.135834    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:14.136111    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:14.136111    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:14.136181    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:14.140868    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:14.142495    4456 pod_ready.go:92] pod "kube-controller-manager-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:14.142564    4456 pod_ready.go:81] duration metric: took 402.9017ms for pod "kube-controller-manager-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:14.142564    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:14.337087    4456 request.go:629] Waited for 194.4072ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100-m03
	I0314 18:27:14.337359    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-832100-m03
	I0314 18:27:14.337359    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:14.337359    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:14.337359    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:14.344392    4456 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:27:14.541470    4456 request.go:629] Waited for 196.4135ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:14.541470    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:14.541470    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:14.541821    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:14.541821    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:14.546625    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:14.547885    4456 pod_ready.go:92] pod "kube-controller-manager-ha-832100-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:14.547885    4456 pod_ready.go:81] duration metric: took 405.2909ms for pod "kube-controller-manager-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:14.547885    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cnzzc" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:14.729590    4456 request.go:629] Waited for 181.4242ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cnzzc
	I0314 18:27:14.729688    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cnzzc
	I0314 18:27:14.729843    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:14.729866    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:14.729866    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:14.734497    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:14.932471    4456 request.go:629] Waited for 196.2143ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:14.932719    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:14.932719    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:14.932719    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:14.932719    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:14.938809    4456 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:27:14.939565    4456 pod_ready.go:92] pod "kube-proxy-cnzzc" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:14.939565    4456 pod_ready.go:81] duration metric: took 391.6505ms for pod "kube-proxy-cnzzc" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:14.939565    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g4l9q" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:15.136238    4456 request.go:629] Waited for 196.4543ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4l9q
	I0314 18:27:15.136385    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4l9q
	I0314 18:27:15.136385    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:15.136385    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:15.136385    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:15.140977    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:15.339536    4456 request.go:629] Waited for 197.9938ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:15.339536    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:15.339536    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:15.339775    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:15.339775    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:15.343820    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:15.344305    4456 pod_ready.go:92] pod "kube-proxy-g4l9q" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:15.344305    4456 pod_ready.go:81] duration metric: took 404.7102ms for pod "kube-proxy-g4l9q" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:15.344305    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z9bkt" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:15.527009    4456 request.go:629] Waited for 182.1136ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9bkt
	I0314 18:27:15.527009    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9bkt
	I0314 18:27:15.527009    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:15.527009    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:15.527009    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:15.531720    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:15.730360    4456 request.go:629] Waited for 197.2543ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:15.730539    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:15.730539    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:15.730637    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:15.730637    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:15.734814    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:15.735761    4456 pod_ready.go:92] pod "kube-proxy-z9bkt" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:15.735761    4456 pod_ready.go:81] duration metric: took 391.4266ms for pod "kube-proxy-z9bkt" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:15.735761    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:15.931051    4456 request.go:629] Waited for 195.2762ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100
	I0314 18:27:15.931051    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100
	I0314 18:27:15.931051    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:15.931051    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:15.931051    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:15.935993    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:16.133118    4456 request.go:629] Waited for 196.3589ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:16.133118    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100
	I0314 18:27:16.133118    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:16.133118    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:16.133118    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:16.138113    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:16.139002    4456 pod_ready.go:92] pod "kube-scheduler-ha-832100" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:16.139531    4456 pod_ready.go:81] duration metric: took 403.7403ms for pod "kube-scheduler-ha-832100" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:16.139531    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:16.338457    4456 request.go:629] Waited for 198.8296ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100-m02
	I0314 18:27:16.338457    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100-m02
	I0314 18:27:16.338457    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:16.338457    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:16.338457    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:16.343118    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:16.540509    4456 request.go:629] Waited for 195.9608ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:16.540832    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m02
	I0314 18:27:16.540931    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:16.540931    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:16.540931    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:16.546763    4456 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:27:16.547435    4456 pod_ready.go:92] pod "kube-scheduler-ha-832100-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:16.547435    4456 pod_ready.go:81] duration metric: took 407.8734ms for pod "kube-scheduler-ha-832100-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:16.547435    4456 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:16.726957    4456 request.go:629] Waited for 178.9807ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100-m03
	I0314 18:27:16.727180    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-832100-m03
	I0314 18:27:16.727180    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:16.727180    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:16.727243    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:16.732136    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:16.928539    4456 request.go:629] Waited for 195.5805ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:16.928777    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes/ha-832100-m03
	I0314 18:27:16.928777    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:16.928777    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:16.928777    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:16.931827    4456 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:27:16.933006    4456 pod_ready.go:92] pod "kube-scheduler-ha-832100-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:27:16.933006    4456 pod_ready.go:81] duration metric: took 385.5425ms for pod "kube-scheduler-ha-832100-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:27:16.933006    4456 pod_ready.go:38] duration metric: took 7.6006356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:27:16.933006    4456 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:27:16.942786    4456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:27:16.965701    4456 api_server.go:72] duration metric: took 17.5769944s to wait for apiserver process to appear ...
	I0314 18:27:16.965756    4456 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:27:16.965756    4456 api_server.go:253] Checking apiserver healthz at https://172.17.90.10:8443/healthz ...
	I0314 18:27:16.975350    4456 api_server.go:279] https://172.17.90.10:8443/healthz returned 200:
	ok
	I0314 18:27:16.975628    4456 round_trippers.go:463] GET https://172.17.90.10:8443/version
	I0314 18:27:16.975628    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:16.975628    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:16.975628    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:16.976813    4456 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0314 18:27:16.977666    4456 api_server.go:141] control plane version: v1.28.4
	I0314 18:27:16.977666    4456 api_server.go:131] duration metric: took 11.9095ms to wait for apiserver health ...
	I0314 18:27:16.977666    4456 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:27:17.131169    4456 request.go:629] Waited for 153.3849ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:27:17.131513    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:27:17.131513    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:17.131513    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:17.131513    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:17.139896    4456 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:27:17.147802    4456 system_pods.go:59] 24 kube-system pods found
	I0314 18:27:17.147802    4456 system_pods.go:61] "coredns-5dd5756b68-5rf5x" [a1975ad0-d327-4b3a-81a0-ead7c000b839] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "coredns-5dd5756b68-mnw55" [1eb87fcd-6c11-4457-b9dc-aaa8ec89f851] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "etcd-ha-832100" [db669e0d-400b-4b97-a76f-53f15d844a6d] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "etcd-ha-832100-m02" [0127bd94-9828-4de0-9724-82b7de2a3730] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "etcd-ha-832100-m03" [848f4086-efb8-4323-ba6d-bef830e929aa] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kindnet-6n7bk" [a1281a26-baf8-4566-b964-e4b042aceae9] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kindnet-jvbts" [1070cc03-2571-4d58-9446-b704ad17b1b1] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kindnet-trr4z" [9576d1b9-b53d-4a68-8d93-59623314b444] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kube-apiserver-ha-832100" [30d411af-dab6-44d2-9887-a08a042d6150] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kube-apiserver-ha-832100-m02" [53db6070-884e-4df1-b77b-15a6415384db] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kube-apiserver-ha-832100-m03" [b6167751-0919-40b8-ad99-2fa53949189f] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kube-controller-manager-ha-832100" [6d430700-f7cd-473e-98a7-c5d4f6c0b984] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kube-controller-manager-ha-832100-m02" [81fa8e3e-357e-4a7a-8acc-4481c0292f26] Running
	I0314 18:27:17.147802    4456 system_pods.go:61] "kube-controller-manager-ha-832100-m03" [fd950d1b-a488-4abf-903d-f1b6f6d875ea] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-proxy-cnzzc" [83a6c448-c577-4c77-8e21-11efe6bab9ac] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-proxy-g4l9q" [5e8dd3b4-2059-47f9-aca1-cadb8dc76b4d] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-proxy-z9bkt" [98f1ecf2-c332-4005-a248-3548fec2336b] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-scheduler-ha-832100" [28207820-b6cd-4573-82b1-9fa8b88741b1] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-scheduler-ha-832100-m02" [d0d35814-e1ca-4136-9e0a-5a578f4d08e2] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-scheduler-ha-832100-m03" [fde2e501-8a64-4863-b806-d42ed506c339] Running
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-vip-ha-832100" [c20342af-ece8-442d-88e0-b15cd453b554] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-vip-ha-832100-m02" [f27cb2fa-b6eb-4c83-97c4-8582bb73aca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:27:17.148329    4456 system_pods.go:61] "kube-vip-ha-832100-m03" [bde414f2-17e7-4b7e-b48d-e52340085739] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:27:17.148329    4456 system_pods.go:61] "storage-provisioner" [099c1e5d-1c0b-4df7-b023-1f8da354c4e6] Running
	I0314 18:27:17.148329    4456 system_pods.go:74] duration metric: took 170.65ms to wait for pod list to return data ...
	I0314 18:27:17.148329    4456 default_sa.go:34] waiting for default service account to be created ...
	I0314 18:27:17.333291    4456 request.go:629] Waited for 184.9487ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:27:17.333613    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:27:17.333613    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:17.333613    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:17.333613    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:17.338544    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:17.338845    4456 default_sa.go:45] found service account: "default"
	I0314 18:27:17.338845    4456 default_sa.go:55] duration metric: took 190.5018ms for default service account to be created ...
	I0314 18:27:17.338845    4456 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 18:27:17.538125    4456 request.go:629] Waited for 199.1196ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:27:17.538125    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/namespaces/kube-system/pods
	I0314 18:27:17.538125    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:17.538125    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:17.538125    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:17.551443    4456 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0314 18:27:17.560008    4456 system_pods.go:86] 24 kube-system pods found
	I0314 18:27:17.560008    4456 system_pods.go:89] "coredns-5dd5756b68-5rf5x" [a1975ad0-d327-4b3a-81a0-ead7c000b839] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "coredns-5dd5756b68-mnw55" [1eb87fcd-6c11-4457-b9dc-aaa8ec89f851] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "etcd-ha-832100" [db669e0d-400b-4b97-a76f-53f15d844a6d] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "etcd-ha-832100-m02" [0127bd94-9828-4de0-9724-82b7de2a3730] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "etcd-ha-832100-m03" [848f4086-efb8-4323-ba6d-bef830e929aa] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kindnet-6n7bk" [a1281a26-baf8-4566-b964-e4b042aceae9] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kindnet-jvbts" [1070cc03-2571-4d58-9446-b704ad17b1b1] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kindnet-trr4z" [9576d1b9-b53d-4a68-8d93-59623314b444] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-apiserver-ha-832100" [30d411af-dab6-44d2-9887-a08a042d6150] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-apiserver-ha-832100-m02" [53db6070-884e-4df1-b77b-15a6415384db] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-apiserver-ha-832100-m03" [b6167751-0919-40b8-ad99-2fa53949189f] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-controller-manager-ha-832100" [6d430700-f7cd-473e-98a7-c5d4f6c0b984] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-controller-manager-ha-832100-m02" [81fa8e3e-357e-4a7a-8acc-4481c0292f26] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-controller-manager-ha-832100-m03" [fd950d1b-a488-4abf-903d-f1b6f6d875ea] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-proxy-cnzzc" [83a6c448-c577-4c77-8e21-11efe6bab9ac] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-proxy-g4l9q" [5e8dd3b4-2059-47f9-aca1-cadb8dc76b4d] Running
	I0314 18:27:17.560008    4456 system_pods.go:89] "kube-proxy-z9bkt" [98f1ecf2-c332-4005-a248-3548fec2336b] Running
	I0314 18:27:17.560539    4456 system_pods.go:89] "kube-scheduler-ha-832100" [28207820-b6cd-4573-82b1-9fa8b88741b1] Running
	I0314 18:27:17.560539    4456 system_pods.go:89] "kube-scheduler-ha-832100-m02" [d0d35814-e1ca-4136-9e0a-5a578f4d08e2] Running
	I0314 18:27:17.560539    4456 system_pods.go:89] "kube-scheduler-ha-832100-m03" [fde2e501-8a64-4863-b806-d42ed506c339] Running
	I0314 18:27:17.560539    4456 system_pods.go:89] "kube-vip-ha-832100" [c20342af-ece8-442d-88e0-b15cd453b554] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:27:17.560539    4456 system_pods.go:89] "kube-vip-ha-832100-m02" [f27cb2fa-b6eb-4c83-97c4-8582bb73aca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:27:17.560616    4456 system_pods.go:89] "kube-vip-ha-832100-m03" [bde414f2-17e7-4b7e-b48d-e52340085739] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:27:17.560616    4456 system_pods.go:89] "storage-provisioner" [099c1e5d-1c0b-4df7-b023-1f8da354c4e6] Running
	I0314 18:27:17.560616    4456 system_pods.go:126] duration metric: took 221.7554ms to wait for k8s-apps to be running ...
	I0314 18:27:17.560616    4456 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 18:27:17.569270    4456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:27:17.598733    4456 system_svc.go:56] duration metric: took 38.1135ms WaitForService to wait for kubelet
	I0314 18:27:17.598733    4456 kubeadm.go:576] duration metric: took 18.2099796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:27:17.598816    4456 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:27:17.738971    4456 request.go:629] Waited for 140.1448ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.90.10:8443/api/v1/nodes
	I0314 18:27:17.739355    4456 round_trippers.go:463] GET https://172.17.90.10:8443/api/v1/nodes
	I0314 18:27:17.739355    4456 round_trippers.go:469] Request Headers:
	I0314 18:27:17.739391    4456 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 18:27:17.739391    4456 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:27:17.743980    4456 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:27:17.746411    4456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:27:17.746487    4456 node_conditions.go:123] node cpu capacity is 2
	I0314 18:27:17.746487    4456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:27:17.746487    4456 node_conditions.go:123] node cpu capacity is 2
	I0314 18:27:17.746487    4456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:27:17.746487    4456 node_conditions.go:123] node cpu capacity is 2
	I0314 18:27:17.746487    4456 node_conditions.go:105] duration metric: took 147.66ms to run NodePressure ...
	I0314 18:27:17.746487    4456 start.go:240] waiting for startup goroutines ...
	I0314 18:27:17.746553    4456 start.go:254] writing updated cluster config ...
	I0314 18:27:17.756080    4456 ssh_runner.go:195] Run: rm -f paused
	I0314 18:27:17.892103    4456 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 18:27:17.897265    4456 out.go:177] * Done! kubectl is now configured to use "ha-832100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 14 18:30:20 ha-832100 dockerd[1330]: time="2024-03-14T18:30:20.571615881Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 14 18:35:21 ha-832100 dockerd[1330]: time="2024-03-14T18:35:21.188002796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:35:21 ha-832100 dockerd[1330]: time="2024-03-14T18:35:21.188084802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:35:21 ha-832100 dockerd[1330]: time="2024-03-14T18:35:21.188104203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:35:21 ha-832100 dockerd[1330]: time="2024-03-14T18:35:21.188277516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:35:27 ha-832100 dockerd[1324]: time="2024-03-14T18:35:27.150392276Z" level=info msg="ignoring event" container=26c880deae1348ae1221108671c6e10d0bceedc298e464575439270a7515d307 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:35:27 ha-832100 dockerd[1330]: time="2024-03-14T18:35:27.151322146Z" level=info msg="shim disconnected" id=26c880deae1348ae1221108671c6e10d0bceedc298e464575439270a7515d307 namespace=moby
	Mar 14 18:35:27 ha-832100 dockerd[1330]: time="2024-03-14T18:35:27.151386551Z" level=warning msg="cleaning up after shim disconnected" id=26c880deae1348ae1221108671c6e10d0bceedc298e464575439270a7515d307 namespace=moby
	Mar 14 18:35:27 ha-832100 dockerd[1330]: time="2024-03-14T18:35:27.151398752Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 14 18:40:30 ha-832100 dockerd[1330]: time="2024-03-14T18:40:30.173600364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:40:30 ha-832100 dockerd[1330]: time="2024-03-14T18:40:30.175276490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:40:30 ha-832100 dockerd[1330]: time="2024-03-14T18:40:30.175392599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:40:30 ha-832100 dockerd[1330]: time="2024-03-14T18:40:30.176666496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:40:36 ha-832100 dockerd[1324]: time="2024-03-14T18:40:36.303015374Z" level=info msg="ignoring event" container=7f6d84ffe6eeaf887da6c9ee794ca426d1285618ce72cb57e27536cbe562d687 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:40:36 ha-832100 dockerd[1330]: time="2024-03-14T18:40:36.304945520Z" level=info msg="shim disconnected" id=7f6d84ffe6eeaf887da6c9ee794ca426d1285618ce72cb57e27536cbe562d687 namespace=moby
	Mar 14 18:40:36 ha-832100 dockerd[1330]: time="2024-03-14T18:40:36.305023326Z" level=warning msg="cleaning up after shim disconnected" id=7f6d84ffe6eeaf887da6c9ee794ca426d1285618ce72cb57e27536cbe562d687 namespace=moby
	Mar 14 18:40:36 ha-832100 dockerd[1330]: time="2024-03-14T18:40:36.305036027Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 14 18:45:48 ha-832100 dockerd[1330]: time="2024-03-14T18:45:48.197486215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:45:48 ha-832100 dockerd[1330]: time="2024-03-14T18:45:48.198581798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:45:48 ha-832100 dockerd[1330]: time="2024-03-14T18:45:48.198672805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:45:48 ha-832100 dockerd[1330]: time="2024-03-14T18:45:48.198909423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:45:54 ha-832100 dockerd[1324]: time="2024-03-14T18:45:54.887590682Z" level=info msg="ignoring event" container=4d110192b62a601b31b6d9ccaf8192a5fdd41ee15019904b37efe1ed0f1bae21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:45:54 ha-832100 dockerd[1330]: time="2024-03-14T18:45:54.888352540Z" level=info msg="shim disconnected" id=4d110192b62a601b31b6d9ccaf8192a5fdd41ee15019904b37efe1ed0f1bae21 namespace=moby
	Mar 14 18:45:54 ha-832100 dockerd[1330]: time="2024-03-14T18:45:54.889271409Z" level=warning msg="cleaning up after shim disconnected" id=4d110192b62a601b31b6d9ccaf8192a5fdd41ee15019904b37efe1ed0f1bae21 namespace=moby
	Mar 14 18:45:54 ha-832100 dockerd[1330]: time="2024-03-14T18:45:54.889367417Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4d110192b62a6       22aaebb38f4a9                                                                                         About a minute ago   Exited              kube-vip                  11                  75d9846fc06fe       kube-vip-ha-832100
	4f9142c71e126       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Running             busybox                   0                   98ed522d977f6       busybox-5b5d89c9d6-zncln
	e8d9b70930630       6e38f40d628db                                                                                         24 minutes ago       Running             storage-provisioner       1                   81016f8048464       storage-provisioner
	3e4608ed92136       ead0a4a53df89                                                                                         27 minutes ago       Running             coredns                   0                   cb8dedae57c55       coredns-5dd5756b68-mnw55
	8fe8402ba95f0       ead0a4a53df89                                                                                         27 minutes ago       Running             coredns                   0                   80577856f1776       coredns-5dd5756b68-5rf5x
	033b57e92730d       6e38f40d628db                                                                                         27 minutes ago       Exited              storage-provisioner       0                   81016f8048464       storage-provisioner
	9017dcb9908b5       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago       Running             kindnet-cni               0                   bf4a1e0a49ad9       kindnet-jvbts
	fe9255d884de3       83f6cc407eed8                                                                                         27 minutes ago       Running             kube-proxy                0                   f6b21a276ec3a       kube-proxy-cnzzc
	ee93388e9e8be       7fe0e6f37db33                                                                                         28 minutes ago       Running             kube-apiserver            0                   49af7adad0829       kube-apiserver-ha-832100
	c62341ce43817       e3db313c6dbc0                                                                                         28 minutes ago       Running             kube-scheduler            0                   b3432d97eff2a       kube-scheduler-ha-832100
	5e44cfe6e22bc       d058aa5ab969c                                                                                         28 minutes ago       Running             kube-controller-manager   0                   cef907dc2fc23       kube-controller-manager-ha-832100
	3b28661f58ab8       73deb9a3f7025                                                                                         28 minutes ago       Running             etcd                      0                   b501cfaa98ae5       etcd-ha-832100
	
	
	==> coredns [3e4608ed9213] <==
	[INFO] 10.244.2.2:47753 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117009s
	[INFO] 10.244.2.2:45797 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000218816s
	[INFO] 10.244.2.2:54800 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023913859s
	[INFO] 10.244.2.2:33361 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193514s
	[INFO] 10.244.2.2:47990 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110708s
	[INFO] 10.244.1.2:43918 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150311s
	[INFO] 10.244.1.2:41533 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122209s
	[INFO] 10.244.0.4:37564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139711s
	[INFO] 10.244.0.4:40929 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000251018s
	[INFO] 10.244.0.4:56498 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179413s
	[INFO] 10.244.2.2:45730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175913s
	[INFO] 10.244.2.2:41389 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082607s
	[INFO] 10.244.2.2:56157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067305s
	[INFO] 10.244.1.2:52311 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178813s
	[INFO] 10.244.1.2:41198 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067105s
	[INFO] 10.244.1.2:58044 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051203s
	[INFO] 10.244.0.4:38792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207815s
	[INFO] 10.244.0.4:54825 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000200015s
	[INFO] 10.244.0.4:37063 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111609s
	[INFO] 10.244.2.2:47276 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124309s
	[INFO] 10.244.2.2:36530 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000126309s
	[INFO] 10.244.2.2:43071 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000263819s
	[INFO] 10.244.1.2:48345 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162612s
	[INFO] 10.244.1.2:38497 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109708s
	[INFO] 10.244.1.2:34357 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000129009s
	
	
	==> coredns [8fe8402ba95f] <==
	[INFO] 10.244.0.4:52653 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000181114s
	[INFO] 10.244.0.4:48615 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000071805s
	[INFO] 10.244.2.2:46048 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.057280613s
	[INFO] 10.244.2.2:33796 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014271s
	[INFO] 10.244.2.2:37552 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000244918s
	[INFO] 10.244.1.2:39094 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076806s
	[INFO] 10.244.1.2:54587 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071905s
	[INFO] 10.244.1.2:32916 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122009s
	[INFO] 10.244.1.2:51974 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.016769734s
	[INFO] 10.244.1.2:46253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111108s
	[INFO] 10.244.1.2:57138 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014721s
	[INFO] 10.244.0.4:60139 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000100907s
	[INFO] 10.244.0.4:56684 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194314s
	[INFO] 10.244.0.4:56094 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186713s
	[INFO] 10.244.0.4:46032 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000146711s
	[INFO] 10.244.0.4:60293 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098807s
	[INFO] 10.244.2.2:53095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066504s
	[INFO] 10.244.1.2:34493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097907s
	[INFO] 10.244.0.4:41544 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102308s
	[INFO] 10.244.2.2:40165 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000171113s
	[INFO] 10.244.1.2:45017 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000172613s
	[INFO] 10.244.0.4:44224 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083307s
	[INFO] 10.244.0.4:50565 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093507s
	[INFO] 10.244.0.4:50972 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095707s
	[INFO] 10.244.0.4:55958 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000108808s
	
	
	==> describe nodes <==
	Name:               ha-832100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-832100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-832100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T18_19_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:19:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-832100
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:47:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:43:51 +0000   Thu, 14 Mar 2024 18:19:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:43:51 +0000   Thu, 14 Mar 2024 18:19:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:43:51 +0000   Thu, 14 Mar 2024 18:19:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:43:51 +0000   Thu, 14 Mar 2024 18:19:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.90.10
	  Hostname:    ha-832100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 171daa7552864915b65ae5f72eac34f1
	  System UUID:                8618e286-8ee3-9d4d-a418-deff29a16f18
	  Boot ID:                    00d987ca-1c21-4890-8848-50fb6e3b581e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-zncln             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-5rf5x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-5dd5756b68-mnw55             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-832100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-jvbts                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-832100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-832100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-cnzzc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-832100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-832100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node ha-832100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node ha-832100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node ha-832100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m   node-controller  Node ha-832100 event: Registered Node ha-832100 in Controller
	  Normal  NodeReady                27m   kubelet          Node ha-832100 status is now: NodeReady
	  Normal  RegisteredNode           23m   node-controller  Node ha-832100 event: Registered Node ha-832100 in Controller
	  Normal  RegisteredNode           20m   node-controller  Node ha-832100 event: Registered Node ha-832100 in Controller
	
	
	Name:               ha-832100-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-832100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-832100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_23_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:23:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-832100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:43:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 14 Mar 2024 18:38:21 +0000   Thu, 14 Mar 2024 18:43:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 14 Mar 2024 18:38:21 +0000   Thu, 14 Mar 2024 18:43:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 14 Mar 2024 18:38:21 +0000   Thu, 14 Mar 2024 18:43:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 14 Mar 2024 18:38:21 +0000   Thu, 14 Mar 2024 18:43:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.92.203
	  Hostname:    ha-832100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3fe8d7a570b4a5c950d13fd91eceebd
	  System UUID:                ace7f3bc-53a3-1848-9390-7794cc938af9
	  Boot ID:                    6a313c1e-e213-49a7-9d70-d2d411d4aa42
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-qjmj7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-832100-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-6n7bk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-832100-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-832100-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-g4l9q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-832100-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-832100-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        23m    kube-proxy       
	  Normal  RegisteredNode  23m    node-controller  Node ha-832100-m02 event: Registered Node ha-832100-m02 in Controller
	  Normal  RegisteredNode  20m    node-controller  Node ha-832100-m02 event: Registered Node ha-832100-m02 in Controller
	  Normal  NodeNotReady    3m31s  node-controller  Node ha-832100-m02 status is now: NodeNotReady
	
	
	Name:               ha-832100-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-832100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-832100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_26_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:26:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-832100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:47:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:43:49 +0000   Thu, 14 Mar 2024 18:26:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:43:49 +0000   Thu, 14 Mar 2024 18:26:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:43:49 +0000   Thu, 14 Mar 2024 18:26:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:43:49 +0000   Thu, 14 Mar 2024 18:27:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.89.54
	  Hostname:    ha-832100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 b90dfaadf10c415486a200ae38e5e9e0
	  System UUID:                70953b27-5407-2f48-b92b-4ef79ac9bbf1
	  Boot ID:                    74eb7504-536e-44be-ae28-79eb68262092
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-9wj82                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-832100-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-trr4z                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-832100-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-832100-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-z9bkt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-832100-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-832100-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        20m   kube-proxy       
	  Normal  RegisteredNode  20m   node-controller  Node ha-832100-m03 event: Registered Node ha-832100-m03 in Controller
	  Normal  RegisteredNode  20m   node-controller  Node ha-832100-m03 event: Registered Node ha-832100-m03 in Controller
	  Normal  RegisteredNode  20m   node-controller  Node ha-832100-m03 event: Registered Node ha-832100-m03 in Controller
	
	
	Name:               ha-832100-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-832100-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-832100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_31_52_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:31:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-832100-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:47:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:42:31 +0000   Thu, 14 Mar 2024 18:31:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:42:31 +0000   Thu, 14 Mar 2024 18:31:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:42:31 +0000   Thu, 14 Mar 2024 18:31:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:42:31 +0000   Thu, 14 Mar 2024 18:32:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.93.81
	  Hostname:    ha-832100-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 8cad17e75d614effb4ee14a3be5b5f3d
	  System UUID:                2485c86f-4918-4c4c-9373-ca334a6cb308
	  Boot ID:                    55c43a59-7ff3-47d6-9d60-f5c92f7a4a0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-qfnmw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-proxy-z9f9r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x5 over 15m)  kubelet          Node ha-832100-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x5 over 15m)  kubelet          Node ha-832100-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x5 over 15m)  kubelet          Node ha-832100-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node ha-832100-m04 event: Registered Node ha-832100-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-832100-m04 event: Registered Node ha-832100-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-832100-m04 event: Registered Node ha-832100-m04 in Controller
	  Normal  NodeReady                15m                kubelet          Node ha-832100-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.449499] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +5.730940] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar14 18:18] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.173200] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[ +29.193455] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.096956] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.504877] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	[  +0.188642] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.209471] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +2.775973] systemd-fstab-generator[1169]: Ignoring "noauto" option for root device
	[  +0.180980] systemd-fstab-generator[1181]: Ignoring "noauto" option for root device
	[  +0.196816] systemd-fstab-generator[1194]: Ignoring "noauto" option for root device
	[  +0.261377] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[ +12.846987] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.100731] kauditd_printk_skb: 205 callbacks suppressed
	[Mar14 18:19] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	[  +7.237686] systemd-fstab-generator[1791]: Ignoring "noauto" option for root device
	[  +0.093976] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.876829] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.028592] systemd-fstab-generator[2788]: Ignoring "noauto" option for root device
	[  +1.191853] kauditd_printk_skb: 24 callbacks suppressed
	[ +19.447796] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.701321] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [3b28661f58ab] <==
	{"level":"warn","ts":"2024-03-14T18:47:25.476221Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.549462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.558842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.564128Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.576049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.58385Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.593384Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.602896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.608312Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.614086Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.624141Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.633036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.640632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.645964Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.650482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.659327Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.66737Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.674706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.675974Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.680085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.683993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.691034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.698959Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.711586Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:47:25.775313Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb1c6b41c6abc846","from":"fb1c6b41c6abc846","remote-peer-id":"c3b06ba6d32fdebc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:47:25 up 30 min,  0 users,  load average: 0.27, 0.31, 0.34
	Linux ha-832100 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9017dcb9908b] <==
	I0314 18:46:46.222540       1 main.go:250] Node ha-832100-m04 has CIDR [10.244.3.0/24] 
	I0314 18:46:56.239352       1 main.go:223] Handling node with IPs: map[172.17.90.10:{}]
	I0314 18:46:56.239697       1 main.go:227] handling current node
	I0314 18:46:56.239875       1 main.go:223] Handling node with IPs: map[172.17.92.203:{}]
	I0314 18:46:56.239906       1 main.go:250] Node ha-832100-m02 has CIDR [10.244.1.0/24] 
	I0314 18:46:56.240354       1 main.go:223] Handling node with IPs: map[172.17.89.54:{}]
	I0314 18:46:56.240378       1 main.go:250] Node ha-832100-m03 has CIDR [10.244.2.0/24] 
	I0314 18:46:56.240698       1 main.go:223] Handling node with IPs: map[172.17.93.81:{}]
	I0314 18:46:56.240761       1 main.go:250] Node ha-832100-m04 has CIDR [10.244.3.0/24] 
	I0314 18:47:06.255340       1 main.go:223] Handling node with IPs: map[172.17.90.10:{}]
	I0314 18:47:06.255490       1 main.go:227] handling current node
	I0314 18:47:06.255519       1 main.go:223] Handling node with IPs: map[172.17.92.203:{}]
	I0314 18:47:06.255540       1 main.go:250] Node ha-832100-m02 has CIDR [10.244.1.0/24] 
	I0314 18:47:06.255764       1 main.go:223] Handling node with IPs: map[172.17.89.54:{}]
	I0314 18:47:06.255876       1 main.go:250] Node ha-832100-m03 has CIDR [10.244.2.0/24] 
	I0314 18:47:06.256022       1 main.go:223] Handling node with IPs: map[172.17.93.81:{}]
	I0314 18:47:06.256106       1 main.go:250] Node ha-832100-m04 has CIDR [10.244.3.0/24] 
	I0314 18:47:16.271011       1 main.go:223] Handling node with IPs: map[172.17.90.10:{}]
	I0314 18:47:16.271189       1 main.go:227] handling current node
	I0314 18:47:16.271223       1 main.go:223] Handling node with IPs: map[172.17.92.203:{}]
	I0314 18:47:16.271509       1 main.go:250] Node ha-832100-m02 has CIDR [10.244.1.0/24] 
	I0314 18:47:16.271982       1 main.go:223] Handling node with IPs: map[172.17.89.54:{}]
	I0314 18:47:16.272048       1 main.go:250] Node ha-832100-m03 has CIDR [10.244.2.0/24] 
	I0314 18:47:16.272125       1 main.go:223] Handling node with IPs: map[172.17.93.81:{}]
	I0314 18:47:16.272132       1 main.go:250] Node ha-832100-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [ee93388e9e8b] <==
	Trace[1041591371]: ---"initial value restored" 173ms (18:31:27.582)
	Trace[1041591371]: ---"Transaction prepared" 112ms (18:31:27.694)
	Trace[1041591371]: ---"Txn call completed" 382ms (18:31:28.076)
	Trace[1041591371]: [668.065645ms] [668.065645ms] END
	I0314 18:31:43.541780       1 trace.go:236] Trace[767782937]: "Update" accept:application/json, */*,audit-id:a3c2bfa0-1268-4893-9c8a-21e066ed31d8,client:172.17.90.10,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (14-Mar-2024 18:31:42.413) (total time: 1127ms):
	Trace[767782937]: ["GuaranteedUpdate etcd3" audit-id:a3c2bfa0-1268-4893-9c8a-21e066ed31d8,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 1127ms (18:31:42.414)
	Trace[767782937]:  ---"Txn call completed" 1126ms (18:31:43.541)]
	Trace[767782937]: [1.12769503s] [1.12769503s] END
	E0314 18:35:27.134794       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0314 18:35:27.134832       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0314 18:35:27.134992       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 7.101µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0314 18:35:27.136523       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0314 18:35:27.138268       1 timeout.go:142] post-timeout activity - time-elapsed: 2.834813ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result: <nil>
	W0314 18:43:27.478247       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.17.89.54 172.17.90.10]
	I0314 18:43:38.088964       1 trace.go:236] Trace[1351124473]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.17.90.10,type:*v1.Endpoints,resource:apiServerIPInfo (14-Mar-2024 18:43:37.458) (total time: 629ms):
	Trace[1351124473]: ---"Transaction prepared" 127ms (18:43:37.588)
	Trace[1351124473]: ---"Txn call completed" 499ms (18:43:38.088)
	Trace[1351124473]: [629.953777ms] [629.953777ms] END
	I0314 18:43:43.854472       1 trace.go:236] Trace[1315983321]: "Get" accept:application/json, */*,audit-id:29a7a991-ac40-4ecc-8c7a-42baeeb878ba,client:172.17.90.10,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (14-Mar-2024 18:43:43.289) (total time: 564ms):
	Trace[1315983321]: ---"About to write a response" 564ms (18:43:43.854)
	Trace[1315983321]: [564.581123ms] [564.581123ms] END
	I0314 18:43:48.087020       1 trace.go:236] Trace[111162415]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.17.90.10,type:*v1.Endpoints,resource:apiServerIPInfo (14-Mar-2024 18:43:47.459) (total time: 627ms):
	Trace[111162415]: ---"Transaction prepared" 387ms (18:43:47.851)
	Trace[111162415]: ---"Txn call completed" 235ms (18:43:48.086)
	Trace[111162415]: [627.5425ms] [627.5425ms] END
	
	
	==> kube-controller-manager [5e44cfe6e22b] <==
	I0314 18:27:52.647934       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-w679p"
	I0314 18:27:52.706296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="237.883792ms"
	I0314 18:27:52.723776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="17.120009ms"
	I0314 18:27:52.724292       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="87.306µs"
	I0314 18:27:55.075368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="49.08881ms"
	I0314 18:27:55.075561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="64.005µs"
	I0314 18:27:55.335581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.955174ms"
	I0314 18:27:55.336837       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.804µs"
	I0314 18:27:55.496246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.456137ms"
	I0314 18:27:55.497261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="191.214µs"
	E0314 18:31:50.282846       1 certificate_controller.go:146] Sync csr-c94wv failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-c94wv": the object has been modified; please apply your changes to the latest version and try again
	I0314 18:31:51.848413       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-832100-m04\" does not exist"
	I0314 18:31:51.915303       1 range_allocator.go:380] "Set node PodCIDR" node="ha-832100-m04" podCIDRs=["10.244.3.0/24"]
	I0314 18:31:51.926342       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qfnmw"
	I0314 18:31:51.927130       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z9f9r"
	I0314 18:31:52.017508       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-832100-m04"
	I0314 18:31:52.017999       1 event.go:307] "Event occurred" object="ha-832100-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-832100-m04 event: Registered Node ha-832100-m04 in Controller"
	I0314 18:31:52.138640       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-gp228"
	I0314 18:31:52.155203       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-qfgv5"
	I0314 18:31:52.246002       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-qwcbm"
	I0314 18:31:52.262127       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-kpbwb"
	I0314 18:32:12.884640       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-832100-m04"
	I0314 18:43:54.029429       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-832100-m04"
	I0314 18:43:54.258691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="24.462956ms"
	I0314 18:43:54.259803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="123.11µs"
	
	
	==> kube-proxy [fe9255d884de] <==
	I0314 18:19:33.797319       1 server_others.go:69] "Using iptables proxy"
	I0314 18:19:33.814945       1 node.go:141] Successfully retrieved node IP: 172.17.90.10
	I0314 18:19:33.905103       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:19:33.905127       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:19:33.911503       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:19:33.911626       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:19:33.912017       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:19:33.912031       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:19:33.914339       1 config.go:188] "Starting service config controller"
	I0314 18:19:33.914496       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:19:33.914636       1 config.go:315] "Starting node config controller"
	I0314 18:19:33.921402       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:19:33.914655       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:19:33.921554       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:19:34.021536       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:19:34.021928       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:19:34.023219       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c62341ce4381] <==
	E0314 18:19:16.522863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 18:19:16.605468       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 18:19:16.605775       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 18:19:16.643979       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 18:19:16.644275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 18:19:18.346336       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0314 18:26:54.750126       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-trr4z\": pod kindnet-trr4z is already assigned to node \"ha-832100-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-trr4z" node="ha-832100-m03"
	E0314 18:26:54.750588       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-trr4z\": pod kindnet-trr4z is already assigned to node \"ha-832100-m03\"" pod="kube-system/kindnet-trr4z"
	E0314 18:26:54.751535       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z9bkt\": pod kube-proxy-z9bkt is already assigned to node \"ha-832100-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z9bkt" node="ha-832100-m03"
	E0314 18:26:54.751778       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z9bkt\": pod kube-proxy-z9bkt is already assigned to node \"ha-832100-m03\"" pod="kube-system/kube-proxy-z9bkt"
	I0314 18:26:54.752484       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z9bkt" node="ha-832100-m03"
	E0314 18:27:52.154207       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-qjmj7\": pod busybox-5b5d89c9d6-qjmj7 is already assigned to node \"ha-832100-m02\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-qjmj7" node="ha-832100-m02"
	E0314 18:27:52.154497       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 0ad1b0ba-dbc3-4f27-8fa8-cc7b850d6caa(default/busybox-5b5d89c9d6-qjmj7) wasn't assumed so cannot be forgotten"
	E0314 18:27:52.154673       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-qjmj7\": pod busybox-5b5d89c9d6-qjmj7 is already assigned to node \"ha-832100-m02\"" pod="default/busybox-5b5d89c9d6-qjmj7"
	I0314 18:27:52.156566       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-qjmj7" node="ha-832100-m02"
	E0314 18:27:52.195181       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-9wj82\": pod busybox-5b5d89c9d6-9wj82 is already assigned to node \"ha-832100-m03\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-9wj82" node="ha-832100-m03"
	E0314 18:27:52.195422       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 808de52b-8630-4ff2-a243-87778fd03efb(default/busybox-5b5d89c9d6-9wj82) wasn't assumed so cannot be forgotten"
	E0314 18:27:52.196041       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-9wj82\": pod busybox-5b5d89c9d6-9wj82 is already assigned to node \"ha-832100-m03\"" pod="default/busybox-5b5d89c9d6-9wj82"
	I0314 18:27:52.196238       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-9wj82" node="ha-832100-m03"
	E0314 18:27:52.196990       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-zncln\": pod busybox-5b5d89c9d6-zncln is already assigned to node \"ha-832100\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-zncln" node="ha-832100"
	E0314 18:27:52.198139       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 999025ff-6bb9-4220-8616-b611779f27d1(default/busybox-5b5d89c9d6-zncln) wasn't assumed so cannot be forgotten"
	E0314 18:27:52.201114       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-zncln\": pod busybox-5b5d89c9d6-zncln is already assigned to node \"ha-832100\"" pod="default/busybox-5b5d89c9d6-zncln"
	I0314 18:27:52.202218       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-zncln" node="ha-832100"
	E0314 18:31:51.983041       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qfnmw\": pod kindnet-qfnmw is already assigned to node \"ha-832100-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qfnmw" node="ha-832100-m04"
	E0314 18:31:51.983509       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qfnmw\": pod kindnet-qfnmw is already assigned to node \"ha-832100-m04\"" pod="kube-system/kindnet-qfnmw"
	
	
	==> kubelet <==
	Mar 14 18:45:55 ha-832100 kubelet[2809]: I0314 18:45:55.074820    2809 scope.go:117] "RemoveContainer" containerID="7f6d84ffe6eeaf887da6c9ee794ca426d1285618ce72cb57e27536cbe562d687"
	Mar 14 18:45:55 ha-832100 kubelet[2809]: I0314 18:45:55.075273    2809 scope.go:117] "RemoveContainer" containerID="4d110192b62a601b31b6d9ccaf8192a5fdd41ee15019904b37efe1ed0f1bae21"
	Mar 14 18:45:55 ha-832100 kubelet[2809]: E0314 18:45:55.075532    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:46:08 ha-832100 kubelet[2809]: I0314 18:46:08.014720    2809 scope.go:117] "RemoveContainer" containerID="4d110192b62a601b31b6d9ccaf8192a5fdd41ee15019904b37efe1ed0f1bae21"
	Mar 14 18:46:08 ha-832100 kubelet[2809]: E0314 18:46:08.015103    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:46:19 ha-832100 kubelet[2809]: E0314 18:46:19.047172    2809 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:46:19 ha-832100 kubelet[2809]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:46:19 ha-832100 kubelet[2809]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:46:19 ha-832100 kubelet[2809]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:46:19 ha-832100 kubelet[2809]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:46:23 ha-832100 kubelet[2809]: I0314 18:46:23.015163    2809 scope.go:117] "RemoveContainer" containerID="4d110192b62a601b31b6d9ccaf8192a5fdd41ee15019904b37efe1ed0f1bae21"
	Mar 14 18:46:23 ha-832100 kubelet[2809]: E0314 18:46:23.016039    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:46:38 ha-832100 kubelet[2809]: I0314 18:46:38.015226    2809 scope.go:117] "RemoveContainer" containerID="4d110192b62a601b31b6d9ccaf8192a5fdd41ee15019904b37efe1ed0f1bae21"
	Mar 14 18:46:38 ha-832100 kubelet[2809]: E0314 18:46:38.015648    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:46:52 ha-832100 kubelet[2809]: I0314 18:46:52.015080    2809 scope.go:117] "RemoveContainer" containerID="4d110192b62a601b31b6d9ccaf8192a5fdd41ee15019904b37efe1ed0f1bae21"
	Mar 14 18:46:52 ha-832100 kubelet[2809]: E0314 18:46:52.015548    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:47:03 ha-832100 kubelet[2809]: I0314 18:47:03.015076    2809 scope.go:117] "RemoveContainer" containerID="4d110192b62a601b31b6d9ccaf8192a5fdd41ee15019904b37efe1ed0f1bae21"
	Mar 14 18:47:03 ha-832100 kubelet[2809]: E0314 18:47:03.016602    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:47:18 ha-832100 kubelet[2809]: I0314 18:47:18.014782    2809 scope.go:117] "RemoveContainer" containerID="4d110192b62a601b31b6d9ccaf8192a5fdd41ee15019904b37efe1ed0f1bae21"
	Mar 14 18:47:18 ha-832100 kubelet[2809]: E0314 18:47:18.015085    2809 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-832100_kube-system(a3840b1dc1c8c7700d743c49c765449b)\"" pod="kube-system/kube-vip-ha-832100" podUID="a3840b1dc1c8c7700d743c49c765449b"
	Mar 14 18:47:19 ha-832100 kubelet[2809]: E0314 18:47:19.047846    2809 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:47:19 ha-832100 kubelet[2809]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:47:19 ha-832100 kubelet[2809]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:47:19 ha-832100 kubelet[2809]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:47:19 ha-832100 kubelet[2809]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 18:47:17.915015    6672 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
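Note: the `Unable to resolve the current Docker CLI context "default"` warning recurs in nearly every stderr capture in this report. It is cosmetic for the commands themselves (minikube only probes the Docker CLI configuration), but it pollutes stderr on every invocation, so it is worth clearing on the Jenkins host. A minimal sketch, assuming the cause is a stale "currentContext" entry in the Docker CLI config that points at a context whose metadata directory no longer exists:

	# Run on the Windows host: point the CLI back at the implicit default context.
	docker context use default
	# If the warning persists, removing the stale "currentContext" key from
	# C:\Users\jenkins.minikube7\.docker\config.json has the same effect.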
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-832100 -n ha-832100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-832100 -n ha-832100: (11.1678723s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-832100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/RestartSecondaryNode (187.72s)
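Note: two signals in the post-mortem logs above deserve a look before re-running: etcd on ha-832100 reports peer c3b06ba6d32fdebc as inactive while dropping heartbeats ("sending buffer is full"), and kube-vip-ha-832100 sits in CrashLoopBackOff with a 5m back-off. A hedged triage sketch; it assumes etcdctl is available inside the guest and that the certificates live under minikube's conventional /var/lib/minikube/certs/etcd layout (verify both on the node):

	# Check cluster health from the primary control plane (etcdctl availability is an assumption).
	minikube -p ha-832100 ssh -- sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint health --cluster
	# Fetch the previous kube-vip container's output to see why it keeps exiting.
	kubectl --context ha-832100 -n kube-system logs kube-vip-ha-832100 --previous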

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (53.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-7446n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-7446n -- sh -c "ping -c 1 172.17.80.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-7446n -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.4220234s)

                                                
                                                
-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 19:22:54.047280    9140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.17.80.1) from pod (busybox-5b5d89c9d6-7446n): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-8drpb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-8drpb -- sh -c "ping -c 1 172.17.80.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-8drpb -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.418533s)

                                                
                                                
-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 19:23:04.929258    3848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.17.80.1) from pod (busybox-5b5d89c9d6-8drpb): exit status 1
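Note: both pods resolve host.minikube.internal but lose 100% of ICMP echoes to the host at 172.17.80.1, so the failure sits on the Windows side rather than in the pod network. One plausible (unconfirmed) cause on Hyper-V is Windows Defender Firewall dropping inbound ICMPv4 from the internal-switch subnet; a sketch of a scoped allow rule, with the subnet inferred from this run's 172.17.x.x addresses:

	# Elevated PowerShell on the Windows host; the /16 scope is an assumption based on the addresses above.
	New-NetFirewallRule -DisplayName "minikube ICMPv4-In" -Direction Inbound -Protocol ICMPv4 -IcmpType 8 -RemoteAddress 172.17.0.0/16 -Action Allow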
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-442000 -n multinode-442000
E0314 19:23:18.224056   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-442000 -n multinode-442000: (11.0173485s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 logs -n 25: (7.8358598s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-049100 ssh -- ls                    | mount-start-2-049100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:12 UTC | 14 Mar 24 19:12 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-049100                           | mount-start-1-049100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:12 UTC | 14 Mar 24 19:13 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-049100 ssh -- ls                    | mount-start-2-049100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:13 UTC | 14 Mar 24 19:13 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-049100                           | mount-start-2-049100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:13 UTC | 14 Mar 24 19:13 UTC |
	| start   | -p mount-start-2-049100                           | mount-start-2-049100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:13 UTC | 14 Mar 24 19:15 UTC |
	| mount   | C:\Users\jenkins.minikube7:/minikube-host         | mount-start-2-049100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:15 UTC |                     |
	|         | --profile mount-start-2-049100 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-049100 ssh -- ls                    | mount-start-2-049100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:15 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-049100                           | mount-start-2-049100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:16 UTC |
	| delete  | -p mount-start-1-049100                           | mount-start-1-049100 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	| start   | -p multinode-442000                               | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:22 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- apply -f                   | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC | 14 Mar 24 19:22 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- rollout                    | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC | 14 Mar 24 19:22 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- get pods -o                | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC | 14 Mar 24 19:22 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- get pods -o                | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC | 14 Mar 24 19:22 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- exec                       | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC | 14 Mar 24 19:22 UTC |
	|         | busybox-5b5d89c9d6-7446n --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- exec                       | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC | 14 Mar 24 19:22 UTC |
	|         | busybox-5b5d89c9d6-8drpb --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- exec                       | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC | 14 Mar 24 19:22 UTC |
	|         | busybox-5b5d89c9d6-7446n --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- exec                       | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC | 14 Mar 24 19:22 UTC |
	|         | busybox-5b5d89c9d6-8drpb --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- exec                       | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC | 14 Mar 24 19:22 UTC |
	|         | busybox-5b5d89c9d6-7446n -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- exec                       | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC | 14 Mar 24 19:22 UTC |
	|         | busybox-5b5d89c9d6-8drpb -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- get pods -o                | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC | 14 Mar 24 19:22 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- exec                       | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC | 14 Mar 24 19:22 UTC |
	|         | busybox-5b5d89c9d6-7446n                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- exec                       | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:22 UTC |                     |
	|         | busybox-5b5d89c9d6-7446n -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- exec                       | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:23 UTC | 14 Mar 24 19:23 UTC |
	|         | busybox-5b5d89c9d6-8drpb                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-442000 -- exec                       | multinode-442000     | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:23 UTC |                     |
	|         | busybox-5b5d89c9d6-8drpb -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:16:05
	Running on machine: minikube7
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:16:05.281792    9056 out.go:291] Setting OutFile to fd 1180 ...
	I0314 19:16:05.282780    9056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:16:05.282780    9056 out.go:304] Setting ErrFile to fd 1292...
	I0314 19:16:05.282780    9056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:16:05.310790    9056 out.go:298] Setting JSON to false
	I0314 19:16:05.314787    9056 start.go:129] hostinfo: {"hostname":"minikube7","uptime":65569,"bootTime":1710378195,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 19:16:05.315791    9056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 19:16:05.323776    9056 out.go:177] * [multinode-442000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 19:16:05.327782    9056 notify.go:220] Checking for updates...
	I0314 19:16:05.329779    9056 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:16:05.331779    9056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:16:05.333789    9056 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 19:16:05.336840    9056 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:16:05.338784    9056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:16:05.341780    9056 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:16:05.341780    9056 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:16:10.372888    9056 out.go:177] * Using the hyperv driver based on user configuration
	I0314 19:16:10.376021    9056 start.go:297] selected driver: hyperv
	I0314 19:16:10.376652    9056 start.go:901] validating driver "hyperv" against <nil>
	I0314 19:16:10.376739    9056 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:16:10.435043    9056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 19:16:10.436301    9056 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:16:10.436301    9056 cni.go:84] Creating CNI manager for ""
	I0314 19:16:10.436301    9056 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0314 19:16:10.436301    9056 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 19:16:10.437007    9056 start.go:340] cluster config:
	{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:16:10.437007    9056 iso.go:125] acquiring lock: {Name:mk1b3e73402180391a20a865a9454da445c269fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:16:10.443166    9056 out.go:177] * Starting "multinode-442000" primary control-plane node in "multinode-442000" cluster
	I0314 19:16:10.445194    9056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:16:10.445336    9056 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0314 19:16:10.445336    9056 cache.go:56] Caching tarball of preloaded images
	I0314 19:16:10.445336    9056 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 19:16:10.445336    9056 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 19:16:10.446000    9056 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:16:10.446242    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json: {Name:mka904f9f7523977aee93994c8b9f11b44f61fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:16:10.447219    9056 start.go:360] acquireMachinesLock for multinode-442000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:16:10.447386    9056 start.go:364] duration metric: took 53.5µs to acquireMachinesLock for "multinode-442000"
	I0314 19:16:10.447501    9056 start.go:93] Provisioning new machine with config: &{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 19:16:10.447674    9056 start.go:125] createHost starting for "" (driver="hyperv")
	I0314 19:16:10.449637    9056 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 19:16:10.450253    9056 start.go:159] libmachine.API.Create for "multinode-442000" (driver="hyperv")
	I0314 19:16:10.450253    9056 client.go:168] LocalClient.Create starting
	I0314 19:16:10.450844    9056 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0314 19:16:10.451007    9056 main.go:141] libmachine: Decoding PEM data...
	I0314 19:16:10.451060    9056 main.go:141] libmachine: Parsing certificate...
	I0314 19:16:10.451276    9056 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0314 19:16:10.451439    9056 main.go:141] libmachine: Decoding PEM data...
	I0314 19:16:10.451439    9056 main.go:141] libmachine: Parsing certificate...
	I0314 19:16:10.451563    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0314 19:16:12.392205    9056 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0314 19:16:12.392729    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:12.392785    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0314 19:16:14.049936    9056 main.go:141] libmachine: [stdout =====>] : False
	
	I0314 19:16:14.050152    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:14.050152    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 19:16:15.465041    9056 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 19:16:15.465041    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:15.465591    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 19:16:18.859602    9056 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 19:16:18.859602    9056 main.go:141] libmachine: [stderr =====>] : 
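
	Each `[executing ==>]` line above is one PowerShell child process run with -NoProfile -NonInteractive, and the paired `[stdout =====>]`/`[stderr =====>]` lines are its captured streams. A minimal Go sketch of that invocation pattern (illustrative, not minikube's actual implementation):

		package main

		import (
			"bytes"
			"fmt"
			"os/exec"
		)

		// runPowerShell runs a single command the way these log lines show:
		// no profile, non-interactive, stdout and stderr captured separately.
		func runPowerShell(command string) (stdout, stderr string, err error) {
			cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
				"-NoProfile", "-NonInteractive", command)
			var out, errOut bytes.Buffer
			cmd.Stdout = &out
			cmd.Stderr = &errOut
			err = cmd.Run()
			return out.String(), errOut.String(), err
		}

		func main() {
			out, errOut, err := runPowerShell(`( Hyper-V\Get-VM multinode-442000 ).state`)
			fmt.Printf("[stdout =====>] : %s\n[stderr =====>] : %s\nerr: %v\n", out, errOut, err)
		}
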
	I0314 19:16:18.861835    9056 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 19:16:19.187583    9056 main.go:141] libmachine: Creating SSH key...
	I0314 19:16:19.321886    9056 main.go:141] libmachine: Creating VM...
	I0314 19:16:19.322884    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 19:16:22.031758    9056 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 19:16:22.031758    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:22.031848    9056 main.go:141] libmachine: Using switch "Default Switch"
	I0314 19:16:22.031908    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 19:16:23.704927    9056 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 19:16:23.705236    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:23.705511    9056 main.go:141] libmachine: Creating VHD
	I0314 19:16:23.705721    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0314 19:16:27.309624    9056 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 96D62F53-6B38-4253-BE69-5942B8815E3F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0314 19:16:27.309717    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:27.309717    9056 main.go:141] libmachine: Writing magic tar header
	I0314 19:16:27.309717    9056 main.go:141] libmachine: Writing SSH key tar header
	I0314 19:16:27.319647    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0314 19:16:30.376802    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:30.376802    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:30.376802    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\disk.vhd' -SizeBytes 20000MB
	I0314 19:16:32.759521    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:32.759521    9056 main.go:141] libmachine: [stderr =====>] : 
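
	The 10MB *fixed* VHD that is written to and then immediately converted to a dynamic VHD and resized to 20000MB is libmachine's usual disk-seeding trick (my reading of the "Writing magic tar header" / "Writing SSH key tar header" lines below): a fixed VHD stores raw disk data from byte 0 with only a footer at the end, so a small tar archive holding the freshly generated SSH key can be written straight into the start of the disk, where the boot2docker guest looks for it on first boot. A hedged Go sketch of that tar-writing step (entry name and layout are assumptions, not minikube's exact code):

		package sketch

		import (
			"archive/tar"
			"os"
		)

		// writeKeyTar writes a tiny tar archive containing the SSH key at the
		// start of the fixed VHD's data area (offset 0).
		func writeKeyTar(vhdPath string, key []byte) error {
			// Open without O_TRUNC so the VHD footer at the end stays intact.
			f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
			if err != nil {
				return err
			}
			defer f.Close()
			tw := tar.NewWriter(f)
			if err := tw.WriteHeader(&tar.Header{
				Name: ".ssh/authorized_keys", // assumed entry name, for illustration
				Mode: 0644,
				Size: int64(len(key)),
			}); err != nil {
				return err
			}
			if _, err := tw.Write(key); err != nil {
				return err
			}
			return tw.Close()
		}
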
	I0314 19:16:32.759521    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-442000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0314 19:16:36.207799    9056 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-442000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0314 19:16:36.208609    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:36.208634    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-442000 -DynamicMemoryEnabled $false
	I0314 19:16:38.328355    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:38.329310    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:38.329310    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-442000 -Count 2
	I0314 19:16:40.363469    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:40.363469    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:40.363469    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-442000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\boot2docker.iso'
	I0314 19:16:42.827513    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:42.827513    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:42.828225    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-442000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\disk.vhd'
	I0314 19:16:45.290078    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:45.290828    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:45.290828    9056 main.go:141] libmachine: Starting VM...
	I0314 19:16:45.290904    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-442000
	I0314 19:16:48.205105    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:48.205105    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:48.205105    9056 main.go:141] libmachine: Waiting for host to start...
	I0314 19:16:48.205105    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:16:50.285585    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:16:50.285585    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:50.285808    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:16:52.663604    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:52.663604    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:53.667654    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:16:55.660156    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:16:55.660923    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:55.660923    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:16:57.982374    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:57.982374    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:58.998179    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:01.019679    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:01.019679    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:01.019732    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:03.361021    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:17:03.361021    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:04.364207    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:06.384129    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:06.385136    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:06.385188    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:08.673945    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:17:08.673945    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:09.678475    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:11.766860    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:11.766940    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:11.766994    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:14.165951    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:14.166510    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:14.166510    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:16.155122    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:16.155122    9056 main.go:141] libmachine: [stderr =====>] : 
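
	The alternating state/ipaddresses queries from 19:16:48 to 19:17:14 are a poll loop: Hyper-V reports the VM as Running almost immediately, but the first NIC returns no address until the guest's network stack is up, so the driver keeps re-querying until an IP (172.17.86.124 here) comes back. A simplified sketch of that loop, reusing runPowerShell from the earlier sketch (retry count and sleep are illustrative; imports "fmt", "strings", "time" assumed):

		// waitForIP polls the first NIC's first address until Hyper-V reports one.
		func waitForIP(vm string) (string, error) {
			query := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)
			for attempt := 0; attempt < 120; attempt++ {
				out, _, err := runPowerShell(query)
				if err != nil {
					return "", err
				}
				if ip := strings.TrimSpace(out); ip != "" {
					return ip, nil
				}
				time.Sleep(time.Second)
			}
			return "", fmt.Errorf("timed out waiting for an IP on %q", vm)
		}
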
	I0314 19:17:16.155242    9056 machine.go:94] provisionDockerMachine start ...
	I0314 19:17:16.155379    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:18.170935    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:18.171786    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:18.171786    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:20.570660    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:20.571719    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:20.576141    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:17:20.587624    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:17:20.587624    9056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:17:20.715077    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:17:20.715077    9056 buildroot.go:166] provisioning hostname "multinode-442000"
	I0314 19:17:20.715077    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:22.673390    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:22.673390    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:22.673390    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:25.022102    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:25.022605    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:25.026226    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:17:25.026751    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:17:25.026938    9056 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-442000 && echo "multinode-442000" | sudo tee /etc/hostname
	I0314 19:17:25.177771    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-442000
	
	I0314 19:17:25.178006    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:27.156527    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:27.156527    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:27.156527    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:29.567430    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:29.567914    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:29.571532    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:17:29.572214    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:17:29.572214    9056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-442000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-442000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-442000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:17:29.714523    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:17:29.714645    9056 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 19:17:29.714780    9056 buildroot.go:174] setting up certificates
	I0314 19:17:29.714780    9056 provision.go:84] configureAuth start
	I0314 19:17:29.714841    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:31.692947    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:31.692947    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:31.693426    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:34.064508    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:34.064508    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:34.064994    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:36.070069    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:36.070069    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:36.070069    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:38.448548    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:38.448548    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:38.448624    9056 provision.go:143] copyHostCerts
	I0314 19:17:38.448756    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 19:17:38.448756    9056 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 19:17:38.448756    9056 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 19:17:38.449300    9056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 19:17:38.450042    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 19:17:38.450308    9056 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 19:17:38.450308    9056 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 19:17:38.450308    9056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 19:17:38.451291    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 19:17:38.451368    9056 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 19:17:38.451368    9056 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 19:17:38.451368    9056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 19:17:38.452367    9056 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-442000 san=[127.0.0.1 172.17.86.124 localhost minikube multinode-442000]
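
	"generating server cert" above issues the Docker TLS server certificate, signed by the minikube CA, for the SAN list logged (loopback, the VM's Hyper-V address, and the host/cluster names). A self-contained standard-library illustration of that kind of issuance (not minikube's code; the 26280h lifetime is the CertExpiration value from the cluster config above):

		package sketch

		import (
			"crypto/rand"
			"crypto/rsa"
			"crypto/x509"
			"crypto/x509/pkix"
			"math/big"
			"net"
			"time"
		)

		// issueServerCert creates a key pair and a server certificate for the
		// SANs shown in the log, signed by the given CA certificate and key.
		func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
			key, err := rsa.GenerateKey(rand.Reader, 2048)
			if err != nil {
				return nil, nil, err
			}
			tmpl := &x509.Certificate{
				SerialNumber: big.NewInt(1), // real issuers use a random serial
				Subject:      pkix.Name{Organization: []string{"jenkins.multinode-442000"}},
				NotBefore:    time.Now(),
				NotAfter:     time.Now().Add(26280 * time.Hour),
				KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
				ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
				DNSNames:     []string{"localhost", "minikube", "multinode-442000"},
				IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.86.124")},
			}
			der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
			return der, key, err
		}
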
	I0314 19:17:39.012068    9056 provision.go:177] copyRemoteCerts
	I0314 19:17:39.020725    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:17:39.020725    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:41.029685    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:41.029685    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:41.030046    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:43.409651    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:43.410595    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:43.411030    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:17:43.521102    9056 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5000468s)
	I0314 19:17:43.521102    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 19:17:43.521102    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:17:43.562966    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 19:17:43.562966    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0314 19:17:43.602914    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 19:17:43.602914    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:17:43.646690    9056 provision.go:87] duration metric: took 13.9308279s to configureAuth
	I0314 19:17:43.646690    9056 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:17:43.647333    9056 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:17:43.647425    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:45.603382    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:45.603382    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:45.603382    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:47.963636    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:47.964123    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:47.970093    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:17:47.970631    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:17:47.970631    9056 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 19:17:48.095379    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 19:17:48.095379    9056 buildroot.go:70] root file system type: tmpfs
	I0314 19:17:48.095379    9056 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 19:17:48.095379    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:50.077513    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:50.077513    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:50.077513    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:52.459944    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:52.459944    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:52.465257    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:17:52.466168    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:17:52.466168    9056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 19:17:52.623284    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 19:17:52.623451    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:54.587664    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:54.587664    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:54.588091    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:56.955826    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:56.956115    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:56.960039    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:17:56.960260    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:17:56.960260    9056 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 19:17:59.046295    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
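
	The `diff -u old new || { mv ...; systemctl ... }` idiom above installs and restarts only when the freshly rendered unit differs from what is on disk; on this first boot /lib/systemd/system/docker.service does not exist yet, so diff exits non-zero, the `||` branch swaps the .new file into place, and systemd enables and starts docker -- hence the "Created symlink" line.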
	
	I0314 19:17:59.046340    9056 machine.go:97] duration metric: took 42.8879545s to provisionDockerMachine
	I0314 19:17:59.046388    9056 client.go:171] duration metric: took 1m48.5881831s to LocalClient.Create
	I0314 19:17:59.046388    9056 start.go:167] duration metric: took 1m48.5882532s to libmachine.API.Create "multinode-442000"
	I0314 19:17:59.046388    9056 start.go:293] postStartSetup for "multinode-442000" (driver="hyperv")
	I0314 19:17:59.046388    9056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:17:59.055700    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:17:59.055893    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:01.039352    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:01.039352    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:01.039440    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:03.420678    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:03.420678    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:03.421148    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:18:03.515040    9056 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4589467s)
	I0314 19:18:03.524552    9056 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:18:03.531208    9056 command_runner.go:130] > NAME=Buildroot
	I0314 19:18:03.531208    9056 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 19:18:03.531208    9056 command_runner.go:130] > ID=buildroot
	I0314 19:18:03.531208    9056 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 19:18:03.531208    9056 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 19:18:03.531208    9056 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:18:03.531208    9056 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 19:18:03.531947    9056 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 19:18:03.533011    9056 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 19:18:03.533011    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 19:18:03.544080    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:18:03.564507    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 19:18:03.614643    9056 start.go:296] duration metric: took 4.5679178s for postStartSetup
	I0314 19:18:03.616501    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:05.619285    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:05.619285    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:05.619285    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:07.980709    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:07.980709    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:07.981508    9056 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:18:07.983779    9056 start.go:128] duration metric: took 1m57.5275628s to createHost
	I0314 19:18:07.983885    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:09.975189    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:09.975189    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:09.976274    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:12.388249    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:12.388671    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:12.394326    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:18:12.394326    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:18:12.394326    9056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:18:12.508792    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710443892.745218743
	
	I0314 19:18:12.508880    9056 fix.go:216] guest clock: 1710443892.745218743
	I0314 19:18:12.508880    9056 fix.go:229] Guest: 2024-03-14 19:18:12.745218743 +0000 UTC Remote: 2024-03-14 19:18:07.9838851 +0000 UTC m=+122.838526201 (delta=4.761333643s)
	I0314 19:18:12.508880    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:14.475037    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:14.475037    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:14.475741    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:16.822748    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:16.822748    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:16.827285    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:18:16.827872    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:18:16.827872    9056 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710443892
	I0314 19:18:16.959165    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 19:18:12 UTC 2024
	
	I0314 19:18:16.959165    9056 fix.go:236] clock set: Thu Mar 14 19:18:12 UTC 2024
	 (err=<nil>)
	I0314 19:18:16.959268    9056 start.go:83] releasing machines lock for "multinode-442000", held for 2m6.5026752s
	I0314 19:18:16.959454    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:18.968414    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:18.968414    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:18.968598    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:21.323046    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:21.323046    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:21.328298    9056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:18:21.328449    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:21.336498    9056 ssh_runner.go:195] Run: cat /version.json
	I0314 19:18:21.336498    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:23.374651    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:23.374651    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:23.375272    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:23.375573    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:23.375678    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:23.375678    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:25.778398    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:25.778398    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:25.779076    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:18:25.814356    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:25.814356    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:25.814356    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:18:25.954976    9056 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 19:18:25.955105    9056 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6263344s)
	I0314 19:18:25.955105    9056 command_runner.go:130] > {"iso_version": "v1.32.1-1710348681-18375", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "fd5757a6603390a2c0efe3b1e5cdd797538203fd"}
	I0314 19:18:25.955219    9056 ssh_runner.go:235] Completed: cat /version.json: (4.6183785s)
	I0314 19:18:25.964367    9056 ssh_runner.go:195] Run: systemctl --version
	I0314 19:18:25.973042    9056 command_runner.go:130] > systemd 252 (252)
	I0314 19:18:25.974058    9056 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0314 19:18:25.983175    9056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 19:18:25.991457    9056 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0314 19:18:25.992057    9056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:18:26.000605    9056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:18:26.031455    9056 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0314 19:18:26.031587    9056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:18:26.031701    9056 start.go:494] detecting cgroup driver to use...
	I0314 19:18:26.032046    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:18:26.067083    9056 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0314 19:18:26.076097    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 19:18:26.104128    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 19:18:26.124613    9056 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 19:18:26.135602    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 19:18:26.162988    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:18:26.192775    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 19:18:26.219503    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:18:26.246010    9056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:18:26.277308    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 19:18:26.304165    9056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:18:26.321414    9056 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 19:18:26.330549    9056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:18:26.357829    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:18:26.534497    9056 ssh_runner.go:195] Run: sudo systemctl restart containerd
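
	Taken together, the run above points crictl at the containerd socket, pins the CRI sandbox (pause) image to registry.k8s.io/pause:3.9, sets restrict_oom_score_adj = false, forces the cgroupfs cgroup driver (SystemdCgroup = false, matching the "configuring containerd to use cgroupfs" line), migrates io.containerd.runtime.v1.linux / runc.v1 references to io.containerd.runc.v2, clears /etc/cni/net.mk, resets the CNI conf_dir to /etc/cni/net.d, and verifies bridge-nf-call-iptables and ip_forward before the daemon-reload and containerd restart.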
	I0314 19:18:26.562667    9056 start.go:494] detecting cgroup driver to use...
	I0314 19:18:26.571628    9056 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 19:18:26.593280    9056 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0314 19:18:26.593280    9056 command_runner.go:130] > [Unit]
	I0314 19:18:26.593280    9056 command_runner.go:130] > Description=Docker Application Container Engine
	I0314 19:18:26.593280    9056 command_runner.go:130] > Documentation=https://docs.docker.com
	I0314 19:18:26.593280    9056 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0314 19:18:26.593280    9056 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0314 19:18:26.593280    9056 command_runner.go:130] > StartLimitBurst=3
	I0314 19:18:26.593280    9056 command_runner.go:130] > StartLimitIntervalSec=60
	I0314 19:18:26.593280    9056 command_runner.go:130] > [Service]
	I0314 19:18:26.593280    9056 command_runner.go:130] > Type=notify
	I0314 19:18:26.593280    9056 command_runner.go:130] > Restart=on-failure
	I0314 19:18:26.593280    9056 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0314 19:18:26.593280    9056 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0314 19:18:26.593280    9056 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0314 19:18:26.593280    9056 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0314 19:18:26.593280    9056 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0314 19:18:26.593280    9056 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0314 19:18:26.593280    9056 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0314 19:18:26.593280    9056 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0314 19:18:26.593280    9056 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0314 19:18:26.593280    9056 command_runner.go:130] > ExecStart=
	I0314 19:18:26.593280    9056 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0314 19:18:26.593280    9056 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0314 19:18:26.593280    9056 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0314 19:18:26.593280    9056 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0314 19:18:26.593280    9056 command_runner.go:130] > LimitNOFILE=infinity
	I0314 19:18:26.593280    9056 command_runner.go:130] > LimitNPROC=infinity
	I0314 19:18:26.593280    9056 command_runner.go:130] > LimitCORE=infinity
	I0314 19:18:26.593280    9056 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0314 19:18:26.593280    9056 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0314 19:18:26.593280    9056 command_runner.go:130] > TasksMax=infinity
	I0314 19:18:26.593280    9056 command_runner.go:130] > TimeoutStartSec=0
	I0314 19:18:26.593280    9056 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0314 19:18:26.593280    9056 command_runner.go:130] > Delegate=yes
	I0314 19:18:26.593280    9056 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0314 19:18:26.593280    9056 command_runner.go:130] > KillMode=process
	I0314 19:18:26.593280    9056 command_runner.go:130] > [Install]
	I0314 19:18:26.593280    9056 command_runner.go:130] > WantedBy=multi-user.target
	I0314 19:18:26.605321    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:18:26.636447    9056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:18:26.682445    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:18:26.713357    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:18:26.746168    9056 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 19:18:26.802754    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:18:26.824378    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:18:26.860447    9056 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0314 19:18:26.870283    9056 ssh_runner.go:195] Run: which cri-dockerd
	I0314 19:18:26.876264    9056 command_runner.go:130] > /usr/bin/cri-dockerd
	I0314 19:18:26.885200    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 19:18:26.902005    9056 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 19:18:26.939356    9056 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 19:18:27.142269    9056 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 19:18:27.320008    9056 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 19:18:27.320267    9056 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
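
The 130-byte /etc/docker/daemon.json written here is not echoed into the log. A minimal sketch of a daemon.json that selects the cgroupfs driver (an assumption -- the file minikube actually writes may carry additional keys) is:

	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	# "native.cgroupdriver" is the documented dockerd exec-opt for choosing the cgroup driver
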
	I0314 19:18:27.362002    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:18:27.549532    9056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 19:18:30.042540    9056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.492822s)
	I0314 19:18:30.054314    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 19:18:30.088796    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:18:30.124499    9056 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 19:18:30.308986    9056 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 19:18:30.496428    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:18:30.695419    9056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 19:18:30.734107    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:18:30.772796    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:18:30.969330    9056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 19:18:31.068994    9056 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 19:18:31.080006    9056 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 19:18:31.088926    9056 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0314 19:18:31.088926    9056 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 19:18:31.088926    9056 command_runner.go:130] > Device: 0,22	Inode: 877         Links: 1
	I0314 19:18:31.089076    9056 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0314 19:18:31.089076    9056 command_runner.go:130] > Access: 2024-03-14 19:18:31.250275803 +0000
	I0314 19:18:31.089076    9056 command_runner.go:130] > Modify: 2024-03-14 19:18:31.250275803 +0000
	I0314 19:18:31.089076    9056 command_runner.go:130] > Change: 2024-03-14 19:18:31.254276381 +0000
	I0314 19:18:31.089076    9056 command_runner.go:130] >  Birth: -
	I0314 19:18:31.089131    9056 start.go:562] Will wait 60s for crictl version
	I0314 19:18:31.098643    9056 ssh_runner.go:195] Run: which crictl
	I0314 19:18:31.103542    9056 command_runner.go:130] > /usr/bin/crictl
	I0314 19:18:31.112319    9056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:18:31.188753    9056 command_runner.go:130] > Version:  0.1.0
	I0314 19:18:31.188847    9056 command_runner.go:130] > RuntimeName:  docker
	I0314 19:18:31.188901    9056 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0314 19:18:31.188901    9056 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 19:18:31.188950    9056 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 19:18:31.198784    9056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:18:31.231311    9056 command_runner.go:130] > 25.0.4
	I0314 19:18:31.239207    9056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:18:31.269296    9056 command_runner.go:130] > 25.0.4
	I0314 19:18:31.275180    9056 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 19:18:31.275413    9056 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 19:18:31.279142    9056 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 19:18:31.279142    9056 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 19:18:31.279142    9056 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 19:18:31.279142    9056 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 19:18:31.281293    9056 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 19:18:31.281293    9056 ip.go:210] interface addr: 172.17.80.1/20
	I0314 19:18:31.289921    9056 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 19:18:31.293554    9056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
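
The /etc/hosts rewrite uses a grep-out-then-append pattern so the host.minikube.internal entry stays unique across reruns. The same command, unpacked and annotated (a sketch; bash is assumed for the $'\t' quoting):

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry for this name
	  printf '172.17.80.1\thost.minikube.internal\n'    # append the current gateway IP
	} > /tmp/h.$$                                       # $$ = shell PID, a unique temp name
	sudo cp /tmp/h.$$ /etc/hosts
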
	I0314 19:18:31.317809    9056 kubeadm.go:877] updating cluster {Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:18:31.318045    9056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:18:31.325013    9056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 19:18:31.347724    9056 docker.go:685] Got preloaded images: 
	I0314 19:18:31.347724    9056 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0314 19:18:31.356761    9056 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 19:18:31.373706    9056 command_runner.go:139] > {"Repositories":{}}
	I0314 19:18:31.383003    9056 ssh_runner.go:195] Run: which lz4
	I0314 19:18:31.388759    9056 command_runner.go:130] > /usr/bin/lz4
	I0314 19:18:31.388759    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0314 19:18:31.397934    9056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 19:18:31.403378    9056 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:18:31.404263    9056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:18:31.404435    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0314 19:18:33.233446    9056 docker.go:649] duration metric: took 1.8445489s to copy over tarball
	I0314 19:18:33.242956    9056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:18:43.700549    9056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.4568137s)
	I0314 19:18:43.700549    9056 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:18:43.773175    9056 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 19:18:43.792166    9056 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021
a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0314 19:18:43.792451    9056 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0314 19:18:43.840521    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:18:44.029057    9056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 19:18:46.586431    9056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5571073s)
	I0314 19:18:46.598318    9056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0314 19:18:46.622400    9056 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:18:46.622400    9056 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 19:18:46.622400    9056 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:18:46.622400    9056 kubeadm.go:928] updating node { 172.17.86.124 8443 v1.28.4 docker true true} ...
	I0314 19:18:46.622986    9056 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-442000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.86.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:18:46.629705    9056 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 19:18:46.666793    9056 command_runner.go:130] > cgroupfs
	I0314 19:18:46.667103    9056 cni.go:84] Creating CNI manager for ""
	I0314 19:18:46.667103    9056 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 19:18:46.667103    9056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:18:46.667230    9056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.86.124 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-442000 NodeName:multinode-442000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.86.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.86.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:18:46.667230    9056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.86.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-442000"
	  kubeletExtraArgs:
	    node-ip: 172.17.86.124
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.86.124"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:18:46.678139    9056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:18:46.696679    9056 command_runner.go:130] > kubeadm
	I0314 19:18:46.696679    9056 command_runner.go:130] > kubectl
	I0314 19:18:46.696679    9056 command_runner.go:130] > kubelet
	I0314 19:18:46.696842    9056 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:18:46.708843    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:18:46.724257    9056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0314 19:18:46.752717    9056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
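
With the base unit and the 10-kubeadm.conf drop-in in place, the rendered kubelet unit can be inspected the same way "systemctl cat docker.service" was used earlier in this run:

	sudo systemctl cat kubelet   # base unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
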
	I0314 19:18:46.780544    9056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
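
Once kubeadm.yaml.new is on the node, the generated config could be validated without mutating anything; --dry-run is a standard kubeadm init flag, although this test run does not use it:

	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
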
	I0314 19:18:46.822612    9056 ssh_runner.go:195] Run: grep 172.17.86.124	control-plane.minikube.internal$ /etc/hosts
	I0314 19:18:46.829333    9056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.86.124	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:18:46.861506    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:18:47.054190    9056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:18:47.081136    9056 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000 for IP: 172.17.86.124
	I0314 19:18:47.081136    9056 certs.go:194] generating shared ca certs ...
	I0314 19:18:47.081136    9056 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:47.081954    9056 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 19:18:47.082211    9056 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 19:18:47.082413    9056 certs.go:256] generating profile certs ...
	I0314 19:18:47.082596    9056 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.key
	I0314 19:18:47.082596    9056 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.crt with IP's: []
	I0314 19:18:47.772197    9056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.crt ...
	I0314 19:18:47.772197    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.crt: {Name:mk545a60be574dec3fdd9c0bdd4bc1a78ea65cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:47.773873    9056 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.key ...
	I0314 19:18:47.773873    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.key: {Name:mk2d9c6fdded790c868f4caa7c901c68b0d2eeab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:47.774624    9056 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.002627ae
	I0314 19:18:47.774624    9056 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.002627ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.86.124]
	I0314 19:18:47.871579    9056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.002627ae ...
	I0314 19:18:47.871579    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.002627ae: {Name:mk63e0c2d38619ba447112803b6467570af87b1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:47.873221    9056 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.002627ae ...
	I0314 19:18:47.873221    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.002627ae: {Name:mk6888b3a912b516db6a768e391b58d87d8289c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:47.874381    9056 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.002627ae -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt
	I0314 19:18:47.884576    9056 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.002627ae -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key
	I0314 19:18:47.885021    9056 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key
	I0314 19:18:47.885021    9056 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt with IP's: []
	I0314 19:18:48.305106    9056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt ...
	I0314 19:18:48.305106    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt: {Name:mk5cc46379e7ac8682b21938dc25812f50e62cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:48.307104    9056 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key ...
	I0314 19:18:48.307104    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key: {Name:mkfc5ae5158a2239c8b58cc48dab0132785bd0ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:48.308098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 19:18:48.308098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 19:18:48.308098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 19:18:48.309098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 19:18:48.309098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 19:18:48.309098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 19:18:48.309098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 19:18:48.318105    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 19:18:48.319273    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 19:18:48.319683    9056 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 19:18:48.319683    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 19:18:48.319978    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 19:18:48.320169    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 19:18:48.320372    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 19:18:48.320508    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 19:18:48.320808    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:18:48.320922    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 19:18:48.321033    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 19:18:48.321174    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:18:48.365785    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 19:18:48.412843    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:18:48.455211    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 19:18:48.502515    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 19:18:48.545760    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:18:48.588738    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:18:48.630735    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:18:48.673162    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:18:48.716890    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 19:18:48.760292    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 19:18:48.806419    9056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
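
At this point the apiserver cert generated above for IPs [10.96.0.1 127.0.0.1 10.0.0.1 172.17.86.124] is on the node; 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR, which in-cluster clients use to reach the API server. The SANs can be confirmed with a standard openssl query:

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
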
	I0314 19:18:48.847648    9056 ssh_runner.go:195] Run: openssl version
	I0314 19:18:48.856127    9056 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 19:18:48.865886    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:18:48.896637    9056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:18:48.903918    9056 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:18:48.904076    9056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:18:48.916662    9056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:18:48.925461    9056 command_runner.go:130] > b5213941
	I0314 19:18:48.934798    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:18:48.962523    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 19:18:48.998814    9056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 19:18:49.006963    9056 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:18:49.007903    9056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:18:49.015949    9056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 19:18:49.024724    9056 command_runner.go:130] > 51391683
	I0314 19:18:49.035096    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
	I0314 19:18:49.062566    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 19:18:49.091548    9056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 19:18:49.098717    9056 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:18:49.098786    9056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:18:49.107493    9056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 19:18:49.115720    9056 command_runner.go:130] > 3ec20f2e
	I0314 19:18:49.124628    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
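
The three hash-and-link steps above implement OpenSSL's hashed-directory lookup: openssl x509 -hash prints the subject-name hash (b5213941, 51391683 and 3ec20f2e in this run), and each cert is linked as <hash>.0 under /etc/ssl/certs. On OpenSSL 1.1+ the bulk equivalent is:

	sudo openssl rehash /etc/ssl/certs   # regenerates all <hash>.N symlinks in one pass
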
	I0314 19:18:49.152394    9056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:18:49.161813    9056 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:18:49.162265    9056 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:18:49.162566    9056 kubeadm.go:391] StartCluster: {Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
8.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:18:49.169506    9056 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 19:18:49.203112    9056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 19:18:49.219922    9056 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0314 19:18:49.219922    9056 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0314 19:18:49.219922    9056 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0314 19:18:49.229511    9056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:18:49.255020    9056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:18:49.270515    9056 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0314 19:18:49.271515    9056 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0314 19:18:49.271515    9056 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0314 19:18:49.271515    9056 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:18:49.271515    9056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:18:49.271515    9056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:18:49.280229    9056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:18:49.296207    9056 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:18:49.296308    9056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:18:49.304317    9056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:18:49.330327    9056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:18:49.345989    9056 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:18:49.345989    9056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:18:49.354886    9056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:18:49.378036    9056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:18:49.397827    9056 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:18:49.397827    9056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:18:49.407020    9056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:18:49.433744    9056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:18:49.448828    9056 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:18:49.449779    9056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:18:49.461484    9056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:18:49.477305    9056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
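
The --ignore-preflight-errors list suppresses checks for paths minikube pre-stages itself (the static-pod manifests and etcd directories) plus Port-10250, Swap, NumCPU and Mem, which minikube chooses to tolerate on its own VMs. To see which checks would otherwise fire, the preflight phase can be re-run on its own (a debugging aid, not part of this test run):

	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
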
	I0314 19:18:49.871822    9056 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:18:49.871822    9056 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:19:04.214413    9056 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:19:04.214413    9056 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0314 19:19:04.214575    9056 command_runner.go:130] > [preflight] Running pre-flight checks
	I0314 19:19:04.214680    9056 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:19:04.214975    9056 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:19:04.214975    9056 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:19:04.215276    9056 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:19:04.215276    9056 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:19:04.215482    9056 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:19:04.215482    9056 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:19:04.215699    9056 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:19:04.215753    9056 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:19:04.220176    9056 out.go:204]   - Generating certificates and keys ...
	I0314 19:19:04.220176    9056 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0314 19:19:04.220176    9056 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:19:04.220721    9056 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0314 19:19:04.220721    9056 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:19:04.220892    9056 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 19:19:04.220892    9056 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 19:19:04.220892    9056 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0314 19:19:04.220892    9056 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 19:19:04.220892    9056 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0314 19:19:04.220892    9056 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 19:19:04.221436    9056 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 19:19:04.221436    9056 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0314 19:19:04.221567    9056 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 19:19:04.221567    9056 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0314 19:19:04.221777    9056 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-442000] and IPs [172.17.86.124 127.0.0.1 ::1]
	I0314 19:19:04.221777    9056 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-442000] and IPs [172.17.86.124 127.0.0.1 ::1]
	I0314 19:19:04.221777    9056 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 19:19:04.221777    9056 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0314 19:19:04.221777    9056 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-442000] and IPs [172.17.86.124 127.0.0.1 ::1]
	I0314 19:19:04.221777    9056 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-442000] and IPs [172.17.86.124 127.0.0.1 ::1]
	I0314 19:19:04.221777    9056 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 19:19:04.222315    9056 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 19:19:04.222467    9056 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 19:19:04.222467    9056 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 19:19:04.222467    9056 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 19:19:04.222467    9056 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0314 19:19:04.222467    9056 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:19:04.222467    9056 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:19:04.222467    9056 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:19:04.222467    9056 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:19:04.222467    9056 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:19:04.222467    9056 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:19:04.222467    9056 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:19:04.222467    9056 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:19:04.222467    9056 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:19:04.222467    9056 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:19:04.222467    9056 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:19:04.222467    9056 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:19:04.223490    9056 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:19:04.223490    9056 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:19:04.225760    9056 out.go:204]   - Booting up control plane ...
	I0314 19:19:04.225760    9056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:19:04.225760    9056 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:19:04.225760    9056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:19:04.225760    9056 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:19:04.226779    9056 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:19:04.226833    9056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:19:04.227045    9056 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:19:04.227045    9056 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:19:04.227256    9056 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:19:04.227256    9056 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:19:04.227411    9056 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:19:04.227411    9056 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0314 19:19:04.227563    9056 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:19:04.227563    9056 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:19:04.227926    9056 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.003900 seconds
	I0314 19:19:04.227926    9056 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.003900 seconds
	I0314 19:19:04.227926    9056 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:19:04.227926    9056 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:19:04.227926    9056 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:19:04.228466    9056 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:19:04.228595    9056 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:19:04.228646    9056 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:19:04.228805    9056 kubeadm.go:309] [mark-control-plane] Marking the node multinode-442000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:19:04.228805    9056 command_runner.go:130] > [mark-control-plane] Marking the node multinode-442000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:19:04.228805    9056 command_runner.go:130] > [bootstrap-token] Using token: 7bdjrk.zjci8xrpcan3qcz1
	I0314 19:19:04.229217    9056 kubeadm.go:309] [bootstrap-token] Using token: 7bdjrk.zjci8xrpcan3qcz1
	I0314 19:19:04.233385    9056 out.go:204]   - Configuring RBAC rules ...
	I0314 19:19:04.233385    9056 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:19:04.233385    9056 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:19:04.233385    9056 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:19:04.233385    9056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:19:04.233385    9056 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:19:04.233385    9056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:19:04.234382    9056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:19:04.234382    9056 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:19:04.234382    9056 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:19:04.234382    9056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:19:04.234382    9056 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:19:04.234382    9056 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:19:04.234382    9056 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:19:04.234382    9056 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:19:04.234382    9056 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:19:04.234382    9056 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0314 19:19:04.235394    9056 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:19:04.235394    9056 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0314 19:19:04.235448    9056 kubeadm.go:309] 
	I0314 19:19:04.235579    9056 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0314 19:19:04.235579    9056 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:19:04.235579    9056 kubeadm.go:309] 
	I0314 19:19:04.235787    9056 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0314 19:19:04.235787    9056 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:19:04.235787    9056 kubeadm.go:309] 
	I0314 19:19:04.235787    9056 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0314 19:19:04.235787    9056 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:19:04.236009    9056 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:19:04.236061    9056 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:19:04.236263    9056 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:19:04.236263    9056 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:19:04.236312    9056 kubeadm.go:309] 
	I0314 19:19:04.236465    9056 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:19:04.236465    9056 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0314 19:19:04.236513    9056 kubeadm.go:309] 
	I0314 19:19:04.236667    9056 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:19:04.236667    9056 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:19:04.236667    9056 kubeadm.go:309] 
	I0314 19:19:04.236770    9056 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:19:04.236828    9056 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0314 19:19:04.236977    9056 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:19:04.236977    9056 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:19:04.237099    9056 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:19:04.237148    9056 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:19:04.237148    9056 kubeadm.go:309] 
	I0314 19:19:04.237311    9056 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:19:04.237311    9056 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:19:04.237518    9056 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0314 19:19:04.237518    9056 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:19:04.237518    9056 kubeadm.go:309] 
	I0314 19:19:04.237695    9056 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 7bdjrk.zjci8xrpcan3qcz1 \
	I0314 19:19:04.237695    9056 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7bdjrk.zjci8xrpcan3qcz1 \
	I0314 19:19:04.237904    9056 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb \
	I0314 19:19:04.237904    9056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb \
	I0314 19:19:04.237904    9056 command_runner.go:130] > 	--control-plane 
	I0314 19:19:04.237904    9056 kubeadm.go:309] 	--control-plane 
	I0314 19:19:04.237904    9056 kubeadm.go:309] 
	I0314 19:19:04.238240    9056 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:19:04.238289    9056 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:19:04.238289    9056 kubeadm.go:309] 
	I0314 19:19:04.238523    9056 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7bdjrk.zjci8xrpcan3qcz1 \
	I0314 19:19:04.238523    9056 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7bdjrk.zjci8xrpcan3qcz1 \
	I0314 19:19:04.238523    9056 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb 
	I0314 19:19:04.238523    9056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb 
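Note: the --discovery-token-ca-cert-hash printed above is, in kubeadm's scheme, the hex-encoded SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch that recomputes it for comparison; the CA path below is the kubeadm default and an assumption here, not something this log prints:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Assumed default kubeadm CA location; not printed in this log.
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum) // compare with the hash in the join command above
    }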
	I0314 19:19:04.238523    9056 cni.go:84] Creating CNI manager for ""
	I0314 19:19:04.238523    9056 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 19:19:04.242798    9056 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0314 19:19:04.260930    9056 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0314 19:19:04.269683    9056 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0314 19:19:04.269683    9056 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0314 19:19:04.269736    9056 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0314 19:19:04.269736    9056 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 19:19:04.269736    9056 command_runner.go:130] > Access: 2024-03-14 19:17:10.602275800 +0000
	I0314 19:19:04.269736    9056 command_runner.go:130] > Modify: 2024-03-13 22:53:41.000000000 +0000
	I0314 19:19:04.269736    9056 command_runner.go:130] > Change: 2024-03-14 19:17:03.878000000 +0000
	I0314 19:19:04.269736    9056 command_runner.go:130] >  Birth: -
	I0314 19:19:04.269818    9056 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0314 19:19:04.269818    9056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0314 19:19:04.339460    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0314 19:19:05.676402    9056 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0314 19:19:05.676483    9056 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0314 19:19:05.676483    9056 command_runner.go:130] > serviceaccount/kindnet created
	I0314 19:19:05.676483    9056 command_runner.go:130] > daemonset.apps/kindnet created
	I0314 19:19:05.676483    9056 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.3368686s)
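Note: the two steps above, an in-memory "scp" of cni.yaml to the node followed by a kubectl apply over SSH, can be sketched with golang.org/x/crypto/ssh as below. The tee-over-stdin transfer is illustrative only; minikube's ssh_runner has its own copy path:

    package provision

    import (
    	"bytes"

    	"golang.org/x/crypto/ssh"
    )

    // applyCNIManifest pushes an in-memory manifest to the node and applies it,
    // roughly mirroring "scp memory --> /var/tmp/minikube/cni.yaml" and the
    // kubectl apply that follows in the log.
    func applyCNIManifest(client *ssh.Client, manifest []byte) error {
    	put, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	put.Stdin = bytes.NewReader(manifest)
    	if err := put.Run("sudo tee /var/tmp/minikube/cni.yaml >/dev/null"); err != nil {
    		put.Close()
    		return err
    	}
    	put.Close()

    	apply, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer apply.Close()
    	return apply.Run("sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply" +
    		" --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml")
    }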
	I0314 19:19:05.676483    9056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:19:05.688358    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:05.688358    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-442000 minikube.k8s.io/updated_at=2024_03_14T19_19_05_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=multinode-442000 minikube.k8s.io/primary=true
	I0314 19:19:05.699880    9056 command_runner.go:130] > -16
	I0314 19:19:05.699988    9056 ops.go:34] apiserver oom_adj: -16
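Note: an oom_adj of -16 tells the kernel's OOM killer to strongly prefer other processes over kube-apiserver. A trivial Go sketch of the same probe the log performs with cat /proc/$(pgrep kube-apiserver)/oom_adj:

    package proc

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    )

    // readOOMAdj returns the oom_adj value for a pid, the value checked above.
    func readOOMAdj(pid int) (int, error) {
    	raw, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
    	if err != nil {
    		return 0, err
    	}
    	return strconv.Atoi(strings.TrimSpace(string(raw)))
    }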
	I0314 19:19:05.830705    9056 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0314 19:19:05.845452    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:05.873831    9056 command_runner.go:130] > node/multinode-442000 labeled
	I0314 19:19:05.976674    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:06.351246    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:06.468279    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:06.853167    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:06.972342    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:07.357540    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:07.473687    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:07.859031    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:07.972654    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:08.348837    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:08.464633    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:08.851437    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:08.978823    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:09.358192    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:09.480197    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:09.859579    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:09.974860    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:10.359445    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:10.474730    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:10.848958    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:10.961216    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:11.351509    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:11.470062    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:11.857829    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:11.974743    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:12.362418    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:12.476698    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:12.863953    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:12.981830    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:13.358618    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:13.482240    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:13.850705    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:13.984052    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:14.354759    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:14.496496    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:14.855799    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:14.974371    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:15.357773    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:15.480087    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:15.860414    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:16.018775    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:16.361497    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:16.492509    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:16.853007    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:16.992271    9056 command_runner.go:130] > NAME      SECRETS   AGE
	I0314 19:19:16.992338    9056 command_runner.go:130] > default   0         1s
	I0314 19:19:16.992943    9056 kubeadm.go:1106] duration metric: took 11.3154205s to wait for elevateKubeSystemPrivileges
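Note: the half-second retries above are simply waiting for the token controller to create the "default" ServiceAccount. A rough client-go equivalent of that loop, assuming a configured clientset and a recent apimachinery (which provides PollUntilContextTimeout):

    package bootstrap

    import (
    	"context"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForDefaultSA polls every 500ms, as the loop above does, until the
    // "default" ServiceAccount exists in the "default" namespace.
    func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				return false, nil // keep polling, as the retries above do
    			}
    			return err == nil, err
    		})
    }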
	W0314 19:19:16.992980    9056 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:19:16.992980    9056 kubeadm.go:393] duration metric: took 27.8283313s to StartCluster
	I0314 19:19:16.992980    9056 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:19:16.992980    9056 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:19:16.995018    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:19:16.996295    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 19:19:16.996423    9056 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 19:19:16.998924    9056 out.go:177] * Verifying Kubernetes components...
	I0314 19:19:16.996476    9056 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:19:16.996828    9056 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:19:16.998988    9056 addons.go:69] Setting storage-provisioner=true in profile "multinode-442000"
	I0314 19:19:16.998988    9056 addons.go:69] Setting default-storageclass=true in profile "multinode-442000"
	I0314 19:19:17.003107    9056 addons.go:234] Setting addon storage-provisioner=true in "multinode-442000"
	I0314 19:19:17.003107    9056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-442000"
	I0314 19:19:17.003107    9056 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:19:17.003759    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:19:17.004293    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
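Note: the Hyper-V driver shells out to PowerShell for VM state, as the [executing ==>] lines show. A minimal Go sketch of the same call via os/exec:

    package hyperv

    import (
    	"os/exec"
    	"strings"
    )

    // vmState invokes PowerShell with the same arguments the driver logs above
    // and returns the VM state string (e.g. "Running").
    func vmState(name string) (string, error) {
    	out, err := exec.Command(
    		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    		"-NoProfile", "-NonInteractive",
    		"( Hyper-V\\Get-VM "+name+" ).state",
    	).Output()
    	return strings.TrimSpace(string(out)), err
    }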
	I0314 19:19:17.012093    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:19:17.489407    9056 command_runner.go:130] > apiVersion: v1
	I0314 19:19:17.489472    9056 command_runner.go:130] > data:
	I0314 19:19:17.489472    9056 command_runner.go:130] >   Corefile: |
	I0314 19:19:17.489472    9056 command_runner.go:130] >     .:53 {
	I0314 19:19:17.489472    9056 command_runner.go:130] >         errors
	I0314 19:19:17.489558    9056 command_runner.go:130] >         health {
	I0314 19:19:17.489558    9056 command_runner.go:130] >            lameduck 5s
	I0314 19:19:17.489558    9056 command_runner.go:130] >         }
	I0314 19:19:17.489558    9056 command_runner.go:130] >         ready
	I0314 19:19:17.489619    9056 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0314 19:19:17.489619    9056 command_runner.go:130] >            pods insecure
	I0314 19:19:17.489619    9056 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0314 19:19:17.489619    9056 command_runner.go:130] >            ttl 30
	I0314 19:19:17.489619    9056 command_runner.go:130] >         }
	I0314 19:19:17.489704    9056 command_runner.go:130] >         prometheus :9153
	I0314 19:19:17.489752    9056 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0314 19:19:17.489752    9056 command_runner.go:130] >            max_concurrent 1000
	I0314 19:19:17.489752    9056 command_runner.go:130] >         }
	I0314 19:19:17.489793    9056 command_runner.go:130] >         cache 30
	I0314 19:19:17.489793    9056 command_runner.go:130] >         loop
	I0314 19:19:17.489793    9056 command_runner.go:130] >         reload
	I0314 19:19:17.489793    9056 command_runner.go:130] >         loadbalance
	I0314 19:19:17.489793    9056 command_runner.go:130] >     }
	I0314 19:19:17.489793    9056 command_runner.go:130] > kind: ConfigMap
	I0314 19:19:17.489793    9056 command_runner.go:130] > metadata:
	I0314 19:19:17.489793    9056 command_runner.go:130] >   creationTimestamp: "2024-03-14T19:19:04Z"
	I0314 19:19:17.489793    9056 command_runner.go:130] >   name: coredns
	I0314 19:19:17.489793    9056 command_runner.go:130] >   namespace: kube-system
	I0314 19:19:17.489929    9056 command_runner.go:130] >   resourceVersion: "266"
	I0314 19:19:17.489929    9056 command_runner.go:130] >   uid: 01b5c7b7-d3d3-4522-bf6f-df10e46139e7
	I0314 19:19:17.490211    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 19:19:17.503235    9056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:19:18.064002    9056 command_runner.go:130] > configmap/coredns replaced
	I0314 19:19:18.064002    9056 start.go:948] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
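Note: the sed pipeline above inserts a CoreDNS hosts stanza mapping host.minikube.internal to the host gateway (172.17.80.1 in this run) ahead of the forward plugin. A hedged client-go sketch of the same read-modify-replace; the test itself does it with kubectl and sed over SSH:

    package dns

    import (
    	"context"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // injectHostRecord adds a hosts{} stanza resolving host.minikube.internal
    // immediately before the forward plugin, like the sed expression above.
    func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
    	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
    		"        forward .", hosts+"        forward .", 1)
    	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
    	return err
    }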
	I0314 19:19:18.065322    9056 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:19:18.065322    9056 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:19:18.066017    9056 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.86.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:19:18.066017    9056 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.86.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:19:18.066766    9056 cert_rotation.go:137] Starting client certificate rotation controller
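Note: the rest.Config dumps above come from loading the integration kubeconfig on disk. A minimal client-go sketch of the same load, assuming only the path:

    package kubeclient

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newClientset loads a kubeconfig the way the loader.go lines above do
    // and builds a typed clientset; pass whatever path your run writes.
    func newClientset(kubeconfigPath string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
    	if err != nil {
    		return nil, err
    	}
    	return kubernetes.NewForConfig(cfg)
    }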
	I0314 19:19:18.067371    9056 node_ready.go:35] waiting up to 6m0s for node "multinode-442000" to be "Ready" ...
	I0314 19:19:18.067371    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:18.067371    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:18.067371    9056 round_trippers.go:463] GET https://172.17.86.124:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0314 19:19:18.067371    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:18.067371    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:18.067371    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:18.067371    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:18.067987    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:18.102163    9056 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0314 19:19:18.102163    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:18.102240    9056 round_trippers.go:580]     Audit-Id: 6b5b8f7e-ea5d-4f15-81b1-69de6a223cc0
	I0314 19:19:18.102240    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:18.102240    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:18.102283    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:18.102283    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:18.102283    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:18 GMT
	I0314 19:19:18.102283    9056 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0314 19:19:18.102349    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:18.102349    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:18.102389    9056 round_trippers.go:580]     Content-Length: 291
	I0314 19:19:18.102389    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:18 GMT
	I0314 19:19:18.102389    9056 round_trippers.go:580]     Audit-Id: e790f2b7-a1ab-4564-ac6d-af208af54880
	I0314 19:19:18.102509    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:18.102509    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:18.102571    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:18.102571    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:18.102626    9056 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7a59f7e-5968-4b64-8f4a-c66c9223a024","resourceVersion":"386","creationTimestamp":"2024-03-14T19:19:04Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0314 19:19:18.103422    9056 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7a59f7e-5968-4b64-8f4a-c66c9223a024","resourceVersion":"386","creationTimestamp":"2024-03-14T19:19:04Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0314 19:19:18.103588    9056 round_trippers.go:463] PUT https://172.17.86.124:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0314 19:19:18.103588    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:18.103588    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:18.103588    9056 round_trippers.go:473]     Content-Type: application/json
	I0314 19:19:18.103588    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:18.118184    9056 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0314 19:19:18.118553    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:18.118553    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:18.118553    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:18.118553    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:18.118553    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:18.118553    9056 round_trippers.go:580]     Content-Length: 291
	I0314 19:19:18.118659    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:18 GMT
	I0314 19:19:18.118659    9056 round_trippers.go:580]     Audit-Id: a40d401a-800f-4667-9857-58f7fa0a2917
	I0314 19:19:18.118715    9056 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7a59f7e-5968-4b64-8f4a-c66c9223a024","resourceVersion":"392","creationTimestamp":"2024-03-14T19:19:04Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0314 19:19:18.570743    9056 round_trippers.go:463] GET https://172.17.86.124:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0314 19:19:18.570923    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:18.571169    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:18.570923    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:18.571169    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:18.571258    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:18.571258    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:18.571449    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:18.575062    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:18.575062    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:18.575062    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:18.575062    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:18.575062    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:18.575062    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:18.575062    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:18.575062    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:18 GMT
	I0314 19:19:18.575062    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:18.575194    9056 round_trippers.go:580]     Content-Length: 291
	I0314 19:19:18.575062    9056 round_trippers.go:580]     Audit-Id: bba6e845-9964-4e12-9503-a3b58ec97b45
	I0314 19:19:18.575194    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:18 GMT
	I0314 19:19:18.575250    9056 round_trippers.go:580]     Audit-Id: f7a97983-5907-45a6-950c-6a73c04c318b
	I0314 19:19:18.575250    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:18.575304    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:18.575304    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:18.575304    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:18.575362    9056 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7a59f7e-5968-4b64-8f4a-c66c9223a024","resourceVersion":"404","creationTimestamp":"2024-03-14T19:19:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0314 19:19:18.575489    9056 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-442000" context rescaled to 1 replicas
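Note: the GET/PUT pair above uses the Deployment's Scale subresource to pin coredns from 2 replicas down to 1. A client-go sketch of the same rescale via GetScale/UpdateScale:

    package kapi

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // scaleCoreDNS performs the same GET + PUT on the Scale subresource shown
    // above, pinning the coredns deployment to the given replica count.
    func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = replicas
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }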
	I0314 19:19:18.575585    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:19.080675    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:19.080675    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:19.080675    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:19.080675    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:19.082653    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:19:19.082653    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:19.083432    9056 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:19:19.084684    9056 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.86.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:19:19.084765    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:19:19.084765    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:19.088194    9056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:19:19.085561    9056 addons.go:234] Setting addon default-storageclass=true in "multinode-442000"
	I0314 19:19:19.088194    9056 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:19:19.088194    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:19.088194    9056 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:19:19.090763    9056 round_trippers.go:580]     Audit-Id: 217b6e06-1491-4dcd-8ae5-925dafa30ec6
	I0314 19:19:19.090878    9056 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:19:19.090930    9056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:19:19.090878    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:19.090993    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:19.091058    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:19.091058    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:19.091058    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:19 GMT
	I0314 19:19:19.091058    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:19:19.091321    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:19.091972    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:19:19.573919    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:19.573919    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:19.573919    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:19.573919    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:19.577239    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:19.578145    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:19.578145    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:19.578210    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:19 GMT
	I0314 19:19:19.578210    9056 round_trippers.go:580]     Audit-Id: 04b17c99-5c07-4abb-8cee-35bb32126611
	I0314 19:19:19.578210    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:19.578210    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:19.578210    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:19.578581    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:20.067880    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:20.068108    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:20.068108    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:20.068108    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:20.072269    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:19:20.072356    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:20.072356    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:20.072356    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:20.072356    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:20.072356    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:20.072356    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:20 GMT
	I0314 19:19:20.072356    9056 round_trippers.go:580]     Audit-Id: a8f9b68f-edca-4171-ab4e-64ba16530ae8
	I0314 19:19:20.072725    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:20.073491    9056 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
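Note: each node_ready probe above reduces to reading the NodeReady condition off the Node object. A minimal client-go sketch of that check, assuming a configured clientset:

    package node

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isReady reports whether the node's NodeReady condition is True, the same
    // check node_ready.go repeats above until the kubelet posts Ready.
    func isReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }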
	I0314 19:19:20.572963    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:20.572963    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:20.572963    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:20.572963    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:20.575562    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:20.576319    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:20.576319    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:20 GMT
	I0314 19:19:20.576319    9056 round_trippers.go:580]     Audit-Id: c2e3a888-bb56-4e91-b136-4831aa035ed6
	I0314 19:19:20.576319    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:20.576319    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:20.576393    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:20.576393    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:20.576393    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:21.079484    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:21.079566    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:21.079566    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:21.079566    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:21.083774    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:21.083774    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:21.083842    9056 round_trippers.go:580]     Audit-Id: f9f5c998-56e0-4f4a-baa7-1b1e10037146
	I0314 19:19:21.083842    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:21.083842    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:21.083842    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:21.083842    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:21.083842    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:21 GMT
	I0314 19:19:21.083842    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:21.239225    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:19:21.239225    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:21.239313    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:19:21.240101    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:19:21.240101    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:21.240286    9056 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:19:21.240310    9056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:19:21.240362    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:19:21.574085    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:21.574202    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:21.574202    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:21.574202    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:21.577508    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:21.577945    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:21.577945    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:21.577945    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:21.577945    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:21.577945    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:21 GMT
	I0314 19:19:21.577945    9056 round_trippers.go:580]     Audit-Id: 9ef3bf50-3514-4e53-8fff-c722d8758457
	I0314 19:19:21.577945    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:21.578156    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:22.081626    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:22.081825    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:22.081825    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:22.081901    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:22.086387    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:22.086483    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:22.086483    9056 round_trippers.go:580]     Audit-Id: 70bae9d3-e84f-4009-a4ac-3d3918ffbce2
	I0314 19:19:22.086610    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:22.086610    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:22.086807    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:22.086886    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:22.086886    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:22 GMT
	I0314 19:19:22.087118    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:22.088009    9056 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:19:22.576077    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:22.576077    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:22.576077    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:22.576236    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:22.579577    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:22.579577    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:22.579577    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:22.579577    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:22.579577    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:22 GMT
	I0314 19:19:22.579577    9056 round_trippers.go:580]     Audit-Id: 28eec736-3ccf-4db3-a5cc-eec0c355525f
	I0314 19:19:22.579577    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:22.579577    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:22.580332    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:23.074395    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:23.074395    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:23.074395    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:23.074395    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:23.078393    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:23.078769    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:23.078769    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:23.078769    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:23.078769    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:23.078769    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:23 GMT
	I0314 19:19:23.078863    9056 round_trippers.go:580]     Audit-Id: e5c6dad1-07c4-4e67-baa8-722587f714ac
	I0314 19:19:23.078863    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:23.079243    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:23.355775    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:19:23.355775    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:23.355775    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:19:23.579204    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:23.579204    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:23.579427    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:23.579427    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:23.582705    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:23.582705    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:23.582705    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:23.582705    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:23 GMT
	I0314 19:19:23.582705    9056 round_trippers.go:580]     Audit-Id: b4b05861-b09a-4607-8b5f-e743206f9532
	I0314 19:19:23.582705    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:23.582705    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:23.582705    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:23.583200    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:23.730982    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:19:23.730982    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:23.730982    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:19:23.881768    9056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:19:24.069140    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:24.069209    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:24.069209    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:24.069209    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:24.282192    9056 round_trippers.go:574] Response Status: 200 OK in 212 milliseconds
	I0314 19:19:24.282329    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:24.282329    9056 round_trippers.go:580]     Audit-Id: 9634d9b3-55bd-4d0f-9fc8-15dc9fd03b54
	I0314 19:19:24.282329    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:24.282329    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:24.282329    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:24.282329    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:24.282329    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:24 GMT
	I0314 19:19:24.282329    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:24.283102    9056 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:19:24.578860    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:24.578912    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:24.578912    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:24.578912    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:24.589586    9056 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 19:19:24.589586    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:24.589586    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:24.589586    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:24.589586    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:24.589586    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:24.589586    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:24 GMT
	I0314 19:19:24.589586    9056 round_trippers.go:580]     Audit-Id: f1d9beb4-c387-4100-8f73-e9538013e3b7
	I0314 19:19:24.589983    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:24.679222    9056 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0314 19:19:24.679222    9056 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0314 19:19:24.679222    9056 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0314 19:19:24.679222    9056 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0314 19:19:24.679222    9056 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0314 19:19:24.679222    9056 command_runner.go:130] > pod/storage-provisioner created
	I0314 19:19:25.072422    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:25.072422    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:25.072422    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:25.072422    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:25.077365    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:19:25.077365    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:25.077365    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:25.077365    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:25 GMT
	I0314 19:19:25.077365    9056 round_trippers.go:580]     Audit-Id: b6732572-c3e7-4fd3-a8ca-6f8c97ae221a
	I0314 19:19:25.077365    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:25.077470    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:25.077470    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:25.077470    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:25.579950    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:25.579950    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:25.580026    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:25.580026    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:25.583285    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:25.583285    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:25.583285    9056 round_trippers.go:580]     Audit-Id: 886eaf90-de63-4e1a-980e-9a6b40bcb0cd
	I0314 19:19:25.583285    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:25.583285    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:25.583693    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:25.583693    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:25.583693    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:25 GMT
	I0314 19:19:25.583822    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:25.777662    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:19:25.777748    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:25.778067    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:19:25.909740    9056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:19:26.071150    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:26.071150    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.071150    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.071150    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.075100    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:26.075100    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.075100    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.075100    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.075100    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.075100    9056 round_trippers.go:580]     Audit-Id: c773c41c-c27f-43de-b667-6465f05093ab
	I0314 19:19:26.075100    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.075100    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.076040    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:26.242307    9056 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0314 19:19:26.242307    9056 round_trippers.go:463] GET https://172.17.86.124:8443/apis/storage.k8s.io/v1/storageclasses
	I0314 19:19:26.242307    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.242307    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.242307    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.247663    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:19:26.247663    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.247663    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.247663    9056 round_trippers.go:580]     Audit-Id: 4b77ca35-2c04-4e7a-bcc8-6a2c2ed571ad
	I0314 19:19:26.247663    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.247663    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.247663    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.247663    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.247663    9056 round_trippers.go:580]     Content-Length: 1273
	I0314 19:19:26.247663    9056 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"standard","uid":"0da12f5d-9716-49a5-a75b-38054817c24c","resourceVersion":"433","creationTimestamp":"2024-03-14T19:19:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-14T19:19:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0314 19:19:26.248280    9056 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"0da12f5d-9716-49a5-a75b-38054817c24c","resourceVersion":"433","creationTimestamp":"2024-03-14T19:19:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-14T19:19:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0314 19:19:26.248350    9056 round_trippers.go:463] PUT https://172.17.86.124:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0314 19:19:26.248350    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.248350    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.248350    9056 round_trippers.go:473]     Content-Type: application/json
	I0314 19:19:26.248350    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.253999    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:19:26.253999    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.253999    9056 round_trippers.go:580]     Content-Length: 1220
	I0314 19:19:26.254991    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.254991    9056 round_trippers.go:580]     Audit-Id: 86735c06-ec35-4db7-b3e6-af50d1f8fe08
	I0314 19:19:26.254991    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.254991    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.254991    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.254991    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.255046    9056 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"0da12f5d-9716-49a5-a75b-38054817c24c","resourceVersion":"433","creationTimestamp":"2024-03-14T19:19:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-14T19:19:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0314 19:19:26.264842    9056 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0314 19:19:26.269836    9056 addons.go:505] duration metric: took 9.2727917s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0314 19:19:26.577351    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:26.577351    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.577351    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.577351    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.580750    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:26.580820    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.580820    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.580820    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.580820    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.580820    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.580907    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.580907    9056 round_trippers.go:580]     Audit-Id: ebd9f036-858a-4b44-b7ac-f9af510d8329
	I0314 19:19:26.581136    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:26.581719    9056 node_ready.go:49] node "multinode-442000" has status "Ready":"True"
	I0314 19:19:26.581816    9056 node_ready.go:38] duration metric: took 8.513805s for node "multinode-442000" to be "Ready" ...
	I0314 19:19:26.581816    9056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:19:26.581982    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods
	I0314 19:19:26.582051    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.582051    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.582051    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.586801    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:19:26.586801    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.586801    9056 round_trippers.go:580]     Audit-Id: 70c2dec2-adaa-4fdf-8a05-393f5f99d4bd
	I0314 19:19:26.586801    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.586801    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.586801    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.586801    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.586801    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.587727    9056 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"436"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"435","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53972 chars]
	I0314 19:19:26.592227    9056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:26.592445    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:19:26.592445    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.592445    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.592445    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.596093    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:26.596093    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.596093    9056 round_trippers.go:580]     Audit-Id: 2f0985a9-63f5-4fde-9d0c-dc842c55e137
	I0314 19:19:26.596093    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.596093    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.596093    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.596093    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.596093    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.596093    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"435","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0314 19:19:26.596872    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:26.596945    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.596945    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.596945    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.599700    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:26.599700    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.599700    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.599700    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.599700    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.599700    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.599700    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.599700    9056 round_trippers.go:580]     Audit-Id: 663fe3fe-9b93-4aa5-b5e6-3fa42945907f
	I0314 19:19:26.601887    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:27.095517    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:19:27.095517    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:27.095517    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:27.095517    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:27.108224    9056 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0314 19:19:27.108582    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:27.108582    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:27.108582    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:27.108582    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:27.108582    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:27.108582    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:27 GMT
	I0314 19:19:27.108582    9056 round_trippers.go:580]     Audit-Id: 29520785-1034-46c2-8216-b93e3b7b7a5c
	I0314 19:19:27.108582    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"435","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0314 19:19:27.109708    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:27.109740    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:27.109740    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:27.109740    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:27.117976    9056 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 19:19:27.117976    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:27.117976    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:27.117976    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:27.117976    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:27.117976    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:27.117976    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:27 GMT
	I0314 19:19:27.117976    9056 round_trippers.go:580]     Audit-Id: 6f39131d-91ef-4a1e-a5bc-5b5dd792667b
	I0314 19:19:27.119861    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:27.603482    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:19:27.603482    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:27.603570    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:27.603570    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:27.608853    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:19:27.609172    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:27.609172    9056 round_trippers.go:580]     Audit-Id: a79608fe-60ec-4414-80e3-34f22731cf25
	I0314 19:19:27.609172    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:27.609172    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:27.609172    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:27.609172    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:27.609172    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:27 GMT
	I0314 19:19:27.609251    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"435","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0314 19:19:27.610194    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:27.610194    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:27.610194    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:27.610194    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:27.613402    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:27.613448    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:27.613448    9056 round_trippers.go:580]     Audit-Id: 467714c9-a164-4b84-a220-39d749a114e8
	I0314 19:19:27.613448    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:27.613448    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:27.613448    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:27.613448    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:27.613448    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:27 GMT
	I0314 19:19:27.613676    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.107517    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:19:28.107517    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.107517    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.107517    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.117664    9056 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 19:19:28.117664    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.117664    9056 round_trippers.go:580]     Audit-Id: c0159bb5-7a2f-4323-8030-aae7dcb595eb
	I0314 19:19:28.117664    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.117664    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.117664    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.117664    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.117664    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.118328    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"435","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0314 19:19:28.119080    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:28.119080    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.119130    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.119130    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.124348    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:19:28.124348    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.124348    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.124348    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.124348    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.124348    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.124348    9056 round_trippers.go:580]     Audit-Id: c56da031-e102-4f94-b5d2-5928052fb656
	I0314 19:19:28.124348    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.126250    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.606513    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:19:28.606513    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.606513    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.606513    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.610079    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:28.610079    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.610079    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.610079    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.610079    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.610079    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.610079    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.610079    9056 round_trippers.go:580]     Audit-Id: febd4365-d3b2-45c6-8711-c2b7992be1fd
	I0314 19:19:28.611080    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"446","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0314 19:19:28.611788    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:28.611788    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.611788    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.611788    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.615013    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:28.615013    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.615013    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.615013    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.615013    9056 round_trippers.go:580]     Audit-Id: 1cd0f839-3e9d-463a-8648-e2e40d2a06d2
	I0314 19:19:28.615013    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.615013    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.615013    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.615239    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.615685    9056 pod_ready.go:92] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:19:28.615685    9056 pod_ready.go:81] duration metric: took 2.0232146s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.615734    9056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.615849    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-442000
	I0314 19:19:28.615849    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.615894    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.615918    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.619166    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:28.619166    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.619166    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.619166    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.619166    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.619166    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.619166    9056 round_trippers.go:580]     Audit-Id: ae5b51ce-e966-4749-aeeb-072936e71530
	I0314 19:19:28.619166    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.619166    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-442000","namespace":"kube-system","uid":"8974ad44-5d36-48f0-bc6b-9115bab5fb5e","resourceVersion":"410","creationTimestamp":"2024-03-14T19:19:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.86.124:2379","kubernetes.io/config.hash":"92e70beb375f9f247f5f8395dc065033","kubernetes.io/config.mirror":"92e70beb375f9f247f5f8395dc065033","kubernetes.io/config.seen":"2024-03-14T19:18:55.420198507Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0314 19:19:28.619888    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:28.619888    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.619888    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.619888    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.623757    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:28.624293    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.624293    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.624293    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.624293    9056 round_trippers.go:580]     Audit-Id: 4fc6b5a0-1dd1-4cfc-a312-cd69198b73b9
	I0314 19:19:28.624293    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.624293    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.624293    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.624402    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.624402    9056 pod_ready.go:92] pod "etcd-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:19:28.624928    9056 pod_ready.go:81] duration metric: took 9.1675ms for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.624928    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.625045    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-442000
	I0314 19:19:28.625045    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.625045    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.625045    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.627611    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:28.627611    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.627611    9056 round_trippers.go:580]     Audit-Id: 6deaf0bc-c9ce-43b3-b341-7a98dee354b8
	I0314 19:19:28.627611    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.627611    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.627611    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.627611    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.627611    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.627611    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-442000","namespace":"kube-system","uid":"02a2d011-5f4c-451c-9698-a88e42e4b6c9","resourceVersion":"414","creationTimestamp":"2024-03-14T19:19:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.86.124:8443","kubernetes.io/config.hash":"81fdcd9740169a0b72b7c7316eeac39f","kubernetes.io/config.mirror":"81fdcd9740169a0b72b7c7316eeac39f","kubernetes.io/config.seen":"2024-03-14T19:18:55.420203908Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0314 19:19:28.628608    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:28.628608    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.628608    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.628608    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.631458    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:28.631458    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.631458    9056 round_trippers.go:580]     Audit-Id: cbe46535-fa1c-4a7f-b453-433ec118195e
	I0314 19:19:28.631458    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.631458    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.631458    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.631458    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.631458    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.631458    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.631458    9056 pod_ready.go:92] pod "kube-apiserver-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:19:28.631458    9056 pod_ready.go:81] duration metric: took 6.5295ms for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.631458    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.631458    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-442000
	I0314 19:19:28.631458    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.631458    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.631458    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.634824    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:28.634824    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.634824    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.634824    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.634824    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.634824    9056 round_trippers.go:580]     Audit-Id: e39cb887-a74b-42e0-bc5b-c87877cc8897
	I0314 19:19:28.635058    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.635058    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.635119    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-442000","namespace":"kube-system","uid":"b16fc874-ef74-44ca-a54f-bb678bf982df","resourceVersion":"413","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.mirror":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.seen":"2024-03-14T19:18:55.420205308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0314 19:19:28.635701    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:28.635701    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.635701    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.635701    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.638043    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:28.638776    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.638776    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.638901    9056 round_trippers.go:580]     Audit-Id: 227bba82-a6bf-47a2-b336-443e679fcd28
	I0314 19:19:28.638901    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.638901    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.638901    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.638901    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.638901    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.638901    9056 pod_ready.go:92] pod "kube-controller-manager-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:19:28.638901    9056 pod_ready.go:81] duration metric: took 7.4424ms for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.638901    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.639565    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:19:28.639565    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.639565    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.639565    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.642543    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:28.642543    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.642543    9056 round_trippers.go:580]     Audit-Id: 91eb30bf-4a08-40e9-aa4c-144d5583fcac
	I0314 19:19:28.642543    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.642543    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.642543    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.642543    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.642543    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.642543    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cg28g","generateName":"kube-proxy-","namespace":"kube-system","uid":"c7f798bf-6722-4731-af8d-ccd5703d116e","resourceVersion":"405","creationTimestamp":"2024-03-14T19:19:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0314 19:19:28.643203    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:28.643203    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.643203    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.643203    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.645771    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:28.646128    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.646128    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.646128    9056 round_trippers.go:580]     Audit-Id: 493e59b8-258e-4143-973c-18bbcffd5098
	I0314 19:19:28.646128    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.646128    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.646128    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.646128    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.646356    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.646781    9056 pod_ready.go:92] pod "kube-proxy-cg28g" in "kube-system" namespace has status "Ready":"True"
	I0314 19:19:28.646781    9056 pod_ready.go:81] duration metric: took 7.8795ms for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.646781    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.807635    9056 request.go:629] Waited for 160.4273ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:19:28.807635    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:19:28.807635    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.807635    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.807635    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.811387    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:28.811387    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.811387    9056 round_trippers.go:580]     Audit-Id: 8b171a01-6b46-4930-ac8f-519674737471
	I0314 19:19:28.811387    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.811387    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.811387    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.811387    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.811387    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:29 GMT
	I0314 19:19:28.812091    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-442000","namespace":"kube-system","uid":"76b10598-fe0d-4a14-a8e4-a32221fbb68f","resourceVersion":"412","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.mirror":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.seen":"2024-03-14T19:18:55.420206709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0314 19:19:29.011128    9056 request.go:629] Waited for 198.95ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:29.011456    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:29.011456    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:29.011510    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:29.011528    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:29.015314    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:29.015359    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:29.015359    9056 round_trippers.go:580]     Audit-Id: 1ee3436a-8578-4b28-a508-96ffbcd4afd2
	I0314 19:19:29.015359    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:29.015359    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:29.015359    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:29.015359    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:29.015359    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:29 GMT
	I0314 19:19:29.015359    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:29.016063    9056 pod_ready.go:92] pod "kube-scheduler-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:19:29.016135    9056 pod_ready.go:81] duration metric: took 369.2555ms for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:29.016135    9056 pod_ready.go:38] duration metric: took 2.434136s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
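Every "Ready":"True" verdict in the pod_ready.go lines above reduces to one condition on the pod object. A minimal Go sketch of that test, assuming the k8s.io/api/core/v1 types client-go returns (podReady is an illustrative name):

    import corev1 "k8s.io/api/core/v1"

    // podReady reports whether the pod carries a Ready condition set to
    // True, which is what pod_ready.go:92 asserts for each pod above.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
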
	I0314 19:19:29.016205    9056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:19:29.025082    9056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:19:29.048969    9056 command_runner.go:130] > 2278
	I0314 19:19:29.049143    9056 api_server.go:72] duration metric: took 12.0517174s to wait for apiserver process to appear ...
	I0314 19:19:29.049143    9056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:19:29.049196    9056 api_server.go:253] Checking apiserver healthz at https://172.17.86.124:8443/healthz ...
	I0314 19:19:29.055860    9056 api_server.go:279] https://172.17.86.124:8443/healthz returned 200:
	ok
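The healthz probe is a plain HTTPS GET whose body must be exactly "ok". A sketch under the assumption of an *http.Client already trusting the cluster CA (checkHealthz is an illustrative name):

    import (
        "io"
        "net/http"
    )

    // checkHealthz returns true when the endpoint answers 200 with body
    // "ok", matching the api_server.go:279 result above.
    func checkHealthz(client *http.Client, base string) (bool, error) {
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }
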
	I0314 19:19:29.056707    9056 round_trippers.go:463] GET https://172.17.86.124:8443/version
	I0314 19:19:29.056707    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:29.056707    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:29.056707    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:29.058268    9056 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0314 19:19:29.058659    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:29.058659    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:29.058659    9056 round_trippers.go:580]     Content-Length: 264
	I0314 19:19:29.058659    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:29 GMT
	I0314 19:19:29.058659    9056 round_trippers.go:580]     Audit-Id: 06cde281-ccff-474d-84ff-7d8af37f794b
	I0314 19:19:29.058659    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:29.058659    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:29.058659    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:29.058659    9056 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0314 19:19:29.058659    9056 api_server.go:141] control plane version: v1.28.4
	I0314 19:19:29.058659    9056 api_server.go:131] duration metric: took 9.5156ms to wait for apiserver health ...
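The /version payload decodes into a handful of string fields. A sketch using only the standard library, with the struct trimmed to the fields shown above:

    import "encoding/json"

    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    // controlPlaneVersion extracts gitVersion; "v1.28.4" for the body above.
    func controlPlaneVersion(body []byte) (string, error) {
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            return "", err
        }
        return v.GitVersion, nil
    }
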
	I0314 19:19:29.058659    9056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:19:29.212722    9056 request.go:629] Waited for 154.051ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods
	I0314 19:19:29.212722    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods
	I0314 19:19:29.212880    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:29.212880    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:29.212880    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:29.217513    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:19:29.218316    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:29.218316    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:29.218316    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:29 GMT
	I0314 19:19:29.218316    9056 round_trippers.go:580]     Audit-Id: 2e4cdeaf-dd17-4807-9f27-b1b96011307b
	I0314 19:19:29.218316    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:29.218316    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:29.218316    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:29.220155    9056 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"446","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I0314 19:19:29.222628    9056 system_pods.go:59] 8 kube-system pods found
	I0314 19:19:29.222695    9056 system_pods.go:61] "coredns-5dd5756b68-d22jc" [2a563b3f-a175-4dc2-9f0b-67dbaefbfaac] Running
	I0314 19:19:29.222695    9056 system_pods.go:61] "etcd-multinode-442000" [8974ad44-5d36-48f0-bc6b-9115bab5fb5e] Running
	I0314 19:19:29.222695    9056 system_pods.go:61] "kindnet-7b9lf" [677b9084-0026-4b21-b041-445940624ed7] Running
	I0314 19:19:29.222695    9056 system_pods.go:61] "kube-apiserver-multinode-442000" [02a2d011-5f4c-451c-9698-a88e42e4b6c9] Running
	I0314 19:19:29.222695    9056 system_pods.go:61] "kube-controller-manager-multinode-442000" [b16fc874-ef74-44ca-a54f-bb678bf982df] Running
	I0314 19:19:29.222695    9056 system_pods.go:61] "kube-proxy-cg28g" [c7f798bf-6722-4731-af8d-ccd5703d116e] Running
	I0314 19:19:29.222762    9056 system_pods.go:61] "kube-scheduler-multinode-442000" [76b10598-fe0d-4a14-a8e4-a32221fbb68f] Running
	I0314 19:19:29.222762    9056 system_pods.go:61] "storage-provisioner" [65d76566-4401-4b28-8452-10ed98624901] Running
	I0314 19:19:29.222762    9056 system_pods.go:74] duration metric: took 164.0906ms to wait for pod list to return data ...
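The eight-pod sweep behind system_pods.go is a single List call against the kube-system namespace. A sketch assuming a client-go *kubernetes.Clientset (construction omitted):

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listSystemPods mirrors the system_pods.go:61 lines above: name,
    // UID, and phase for every pod in kube-system.
    func listSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }
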
	I0314 19:19:29.222762    9056 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:19:29.415379    9056 request.go:629] Waited for 192.5307ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/default/serviceaccounts
	I0314 19:19:29.415682    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/default/serviceaccounts
	I0314 19:19:29.415837    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:29.415837    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:29.415837    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:29.421374    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:19:29.421374    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:29.421374    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:29.421374    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:29.421374    9056 round_trippers.go:580]     Content-Length: 261
	I0314 19:19:29.421374    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:29 GMT
	I0314 19:19:29.421374    9056 round_trippers.go:580]     Audit-Id: c52939c0-3784-4e4d-a3a8-2e2940593bd9
	I0314 19:19:29.421374    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:29.421374    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:29.421374    9056 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"31dfe296-58ba-4a37-a509-52c518a0c41a","resourceVersion":"365","creationTimestamp":"2024-03-14T19:19:16Z"}}]}
	I0314 19:19:29.421374    9056 default_sa.go:45] found service account: "default"
	I0314 19:19:29.421374    9056 default_sa.go:55] duration metric: took 198.5968ms for default service account to be created ...
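The default-service-account wait polls the ServiceAccountList shown above until an entry named "default" appears. A sketch assuming client-go plus apimachinery's wait helpers; the 500ms interval and 2-minute timeout are illustrative, not the logged values:

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitDefaultSA polls until a service account named "default" exists,
    // the loop the default_sa.go lines above are timing.
    func waitDefaultSA(ctx context.Context, cs *kubernetes.Clientset) error {
        return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
            sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
            if err != nil {
                return false, nil // transient error: keep polling
            }
            for _, sa := range sas.Items {
                if sa.Name == "default" {
                    return true, nil
                }
            }
            return false, nil
        })
    }
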
	I0314 19:19:29.421374    9056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:19:29.616427    9056 request.go:629] Waited for 194.5078ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods
	I0314 19:19:29.616698    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods
	I0314 19:19:29.616864    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:29.616931    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:29.616931    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:29.621730    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:19:29.621730    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:29.621730    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:29.621730    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:29.621730    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:29.621730    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:29 GMT
	I0314 19:19:29.621730    9056 round_trippers.go:580]     Audit-Id: 84d6813b-d6ef-44d2-aafe-7f08b1275379
	I0314 19:19:29.621730    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:29.623963    9056 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"446","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I0314 19:19:29.626771    9056 system_pods.go:86] 8 kube-system pods found
	I0314 19:19:29.626850    9056 system_pods.go:89] "coredns-5dd5756b68-d22jc" [2a563b3f-a175-4dc2-9f0b-67dbaefbfaac] Running
	I0314 19:19:29.626850    9056 system_pods.go:89] "etcd-multinode-442000" [8974ad44-5d36-48f0-bc6b-9115bab5fb5e] Running
	I0314 19:19:29.626850    9056 system_pods.go:89] "kindnet-7b9lf" [677b9084-0026-4b21-b041-445940624ed7] Running
	I0314 19:19:29.626850    9056 system_pods.go:89] "kube-apiserver-multinode-442000" [02a2d011-5f4c-451c-9698-a88e42e4b6c9] Running
	I0314 19:19:29.626850    9056 system_pods.go:89] "kube-controller-manager-multinode-442000" [b16fc874-ef74-44ca-a54f-bb678bf982df] Running
	I0314 19:19:29.626850    9056 system_pods.go:89] "kube-proxy-cg28g" [c7f798bf-6722-4731-af8d-ccd5703d116e] Running
	I0314 19:19:29.626850    9056 system_pods.go:89] "kube-scheduler-multinode-442000" [76b10598-fe0d-4a14-a8e4-a32221fbb68f] Running
	I0314 19:19:29.626912    9056 system_pods.go:89] "storage-provisioner" [65d76566-4401-4b28-8452-10ed98624901] Running
	I0314 19:19:29.626912    9056 system_pods.go:126] duration metric: took 204.9915ms to wait for k8s-apps to be running ...
	I0314 19:19:29.626912    9056 system_svc.go:44] waiting for kubelet service to be running ...
	I0314 19:19:29.636242    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:19:29.660009    9056 system_svc.go:56] duration metric: took 33.0946ms (WaitForService) to wait for the kubelet service
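The kubelet check rides entirely on systemctl's exit status, since --quiet suppresses all output. A sketch over a generic SSH session, assuming an established *ssh.Session from golang.org/x/crypto/ssh:

    // sess.Run returns a non-nil error for any non-zero exit status, so
    // err == nil means the kubelet unit is active.
    if err := sess.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
        return fmt.Errorf("kubelet service not active: %w", err)
    }
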
	I0314 19:19:29.660783    9056 kubeadm.go:576] duration metric: took 12.663312s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:19:29.660783    9056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:19:29.817276    9056 request.go:629] Waited for 156.38ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/nodes
	I0314 19:19:29.817488    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes
	I0314 19:19:29.817488    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:29.817488    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:29.817488    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:29.820848    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:29.820848    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:29.820848    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:29.820848    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:30 GMT
	I0314 19:19:29.820848    9056 round_trippers.go:580]     Audit-Id: a81117a1-cf3b-457b-829d-7b47d812850b
	I0314 19:19:29.820848    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:29.820848    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:29.820848    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:29.821916    9056 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I0314 19:19:29.823281    9056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:19:29.823461    9056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:19:29.823461    9056 node_conditions.go:105] duration metric: took 162.6649ms to run NodePressure ...
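The NodePressure pass reads two capacity figures off each node in the NodeList. A sketch assuming the same client-go clientset as above:

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeCapacities prints the two figures node_conditions.go reports:
    // ephemeral-storage and CPU capacity per node.
    func nodeCapacities(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
        }
        return nil
    }
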
	I0314 19:19:29.823537    9056 start.go:240] waiting for startup goroutines ...
	I0314 19:19:29.823537    9056 start.go:245] waiting for cluster config update ...
	I0314 19:19:29.823537    9056 start.go:254] writing updated cluster config ...
	I0314 19:19:29.827345    9056 out.go:177] 
	I0314 19:19:29.830579    9056 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:19:29.837307    9056 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:19:29.837307    9056 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:19:29.843283    9056 out.go:177] * Starting "multinode-442000-m02" worker node in "multinode-442000" cluster
	I0314 19:19:29.859795    9056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:19:29.860538    9056 cache.go:56] Caching tarball of preloaded images
	I0314 19:19:29.860538    9056 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 19:19:29.860538    9056 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 19:19:29.861067    9056 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:19:29.863037    9056 start.go:360] acquireMachinesLock for multinode-442000-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:19:29.863037    9056 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-442000-m02"
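acquireMachinesLock serializes machine creation across the parallel test binaries through a named OS-level mutex; the Spec printed above matches github.com/juju/mutex semantics (minikube vendors a close variant). A hedged sketch of acquiring that lock:

    import (
        "time"

        "github.com/juju/clock"
        "github.com/juju/mutex"
    )

    // acquire blocks until the named machine lock is free, retrying every
    // 500ms and giving up after 13 minutes, as in the start.go:360 Spec.
    func acquire() (mutex.Releaser, error) {
        return mutex.Acquire(mutex.Spec{
            Name:    "mk814f158b6187cc9297257c36fdbe0d2871c950",
            Clock:   clock.WallClock,
            Delay:   500 * time.Millisecond,
            Timeout: 13 * time.Minute,
        })
    }
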
	I0314 19:19:29.863037    9056 start.go:93] Provisioning new machine with config: &{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:19:29.863037    9056 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0314 19:19:29.866149    9056 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 19:19:29.866149    9056 start.go:159] libmachine.API.Create for "multinode-442000" (driver="hyperv")
	I0314 19:19:29.866149    9056 client.go:168] LocalClient.Create starting
	I0314 19:19:29.866789    9056 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0314 19:19:29.866789    9056 main.go:141] libmachine: Decoding PEM data...
	I0314 19:19:29.866789    9056 main.go:141] libmachine: Parsing certificate...
	I0314 19:19:29.866789    9056 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0314 19:19:29.866789    9056 main.go:141] libmachine: Decoding PEM data...
	I0314 19:19:29.866789    9056 main.go:141] libmachine: Parsing certificate...
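The three libmachine steps above (read, decode PEM, parse) are all standard library. A sketch:

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "os"
    )

    // loadCert performs the read/decode/parse sequence logged above for
    // ca.pem and cert.pem.
    func loadCert(path string) (*x509.Certificate, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, err
        }
        block, _ := pem.Decode(data)
        if block == nil || block.Type != "CERTIFICATE" {
            return nil, errors.New("no PEM certificate block found")
        }
        return x509.ParseCertificate(block.Bytes)
    }
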
	I0314 19:19:29.866789    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0314 19:19:31.659906    9056 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0314 19:19:31.659906    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:31.660084    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0314 19:19:33.302323    9056 main.go:141] libmachine: [stdout =====>] : False
	
	I0314 19:19:33.302323    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:33.302323    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 19:19:34.727469    9056 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 19:19:34.727469    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:34.727753    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 19:19:38.175287    9056 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 19:19:38.175287    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:38.177129    9056 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 19:19:38.498644    9056 main.go:141] libmachine: Creating SSH key...
	I0314 19:19:38.835451    9056 main.go:141] libmachine: Creating VM...
	I0314 19:19:38.835451    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 19:19:41.547593    9056 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 19:19:41.547593    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:41.547872    9056 main.go:141] libmachine: Using switch "Default Switch"
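Each [executing ==>] line is a fresh powershell.exe invocation whose stdout is captured and, when the command ends in ConvertTo-Json, decoded in Go. A sketch of the switch query (error handling trimmed; field names match the JSON above):

    import (
        "encoding/json"
        "os/exec"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    // listSwitches shells out the same way the [executing ==>] lines do
    // and decodes the JSON array that ConvertTo-Json prints.
    func listSwitches() ([]vmSwitch, error) {
        cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive",
            `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`)
        out, err := cmd.Output()
        if err != nil {
            return nil, err
        }
        var switches []vmSwitch
        err = json.Unmarshal(out, &switches)
        return switches, err
    }
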
	I0314 19:19:41.547927    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 19:19:43.233638    9056 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 19:19:43.233638    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:43.233868    9056 main.go:141] libmachine: Creating VHD
	I0314 19:19:43.234051    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0314 19:19:46.847049    9056 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D63B9E0B-C829-4A9A-BBFD-3DC3AB7DCAC0
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0314 19:19:46.847049    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:46.847049    9056 main.go:141] libmachine: Writing magic tar header
	I0314 19:19:46.847858    9056 main.go:141] libmachine: Writing SSH key tar header
	I0314 19:19:46.856092    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0314 19:19:49.893165    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:19:49.893165    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:49.893165    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\disk.vhd' -SizeBytes 20000MB
	I0314 19:19:52.297877    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:19:52.297877    9056 main.go:141] libmachine: [stderr =====>] : 
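The fixed 10MB VHD exists only so the SSH key can be written into its raw bytes as a tar stream (the "Writing magic tar header" lines above); boot2docker unpacks it on first boot, after which the disk is converted to dynamic and resized. A sketch of the tar step, not the exact driver code:

    import (
        "archive/tar"
        "os"
    )

    // writeMagicTar overwrites the start of the raw fixed VHD with a tar
    // stream holding the SSH key; boot2docker unpacks it on first boot.
    func writeMagicTar(vhdPath string, key []byte) error {
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o644, Size: int64(len(key))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(key); err != nil {
            return err
        }
        return tw.Close()
    }
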
	I0314 19:19:52.297877    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-442000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0314 19:19:55.741836    9056 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-442000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0314 19:19:55.741836    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:55.742229    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-442000-m02 -DynamicMemoryEnabled $false
	I0314 19:19:57.851551    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:19:57.851551    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:57.851978    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-442000-m02 -Count 2
	I0314 19:19:59.887962    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:19:59.887962    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:59.888826    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-442000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\boot2docker.iso'
	I0314 19:20:02.298644    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:02.299184    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:02.299351    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-442000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\disk.vhd'
	I0314 19:20:04.791298    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:04.791298    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:04.791298    9056 main.go:141] libmachine: Starting VM...
	I0314 19:20:04.791298    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-442000-m02
	I0314 19:20:07.680362    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:07.680362    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:07.680791    9056 main.go:141] libmachine: Waiting for host to start...
	I0314 19:20:07.680831    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:09.771952    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:09.771952    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:09.772891    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:12.113387    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:12.113387    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:13.122470    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:15.121644    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:15.121644    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:15.122092    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:17.469600    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:17.469600    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:18.477356    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:20.487594    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:20.487823    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:20.487901    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:22.780302    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:22.780628    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:23.789191    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:25.820854    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:25.821256    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:25.821337    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:28.216333    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:28.216333    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:29.222901    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:31.251378    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:31.251378    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:31.251606    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:33.669107    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:20:33.669107    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:33.669107    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:35.653960    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:35.653960    9056 main.go:141] libmachine: [stderr =====>] : 
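The alternating .state / ipaddresses[0] queries above repeat on a roughly one-second cadence until Hyper-V finally reports 172.17.80.135. A sketch of that wait loop, where getIP is an assumed callback standing in for the PowerShell query:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP retries until the VM's first network adapter reports a
// non-empty address, mirroring the "Waiting for host to start..." polling.
func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(time.Second) // ~1s between attempts, as in the log
	}
	return "", errors.New("timed out waiting for an IP")
}

func main() {
	ip, err := waitForIP(func() (string, error) { return "172.17.80.135", nil }, time.Minute)
	fmt.Println(ip, err)
}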
	I0314 19:20:35.653960    9056 machine.go:94] provisionDockerMachine start ...
	I0314 19:20:35.653960    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:37.683088    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:37.683785    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:37.683864    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:40.056495    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:20:40.057274    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:40.063027    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:20:40.074901    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:20:40.074901    9056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:20:40.218416    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:20:40.218525    9056 buildroot.go:166] provisioning hostname "multinode-442000-m02"
	I0314 19:20:40.218525    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:42.180793    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:42.180793    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:42.180793    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:44.573143    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:20:44.573326    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:44.577302    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:20:44.577784    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:20:44.577862    9056 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-442000-m02 && echo "multinode-442000-m02" | sudo tee /etc/hostname
	I0314 19:20:44.744224    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-442000-m02
	
	I0314 19:20:44.744272    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:46.716300    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:46.716300    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:46.716300    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:49.079867    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:20:49.080620    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:49.086355    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:20:49.086962    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:20:49.086962    9056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-442000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-442000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-442000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:20:49.243342    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
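Each "About to run SSH command" step above runs as the docker user against port 22 with the machine's id_rsa key. A minimal sketch of that round trip using golang.org/x/crypto/ssh, with host-key verification skipped purely for brevity (this is not minikube's actual sshutil code, and the key path is a placeholder):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials addr, authenticates with the given private key, and
// runs one command, returning its combined output.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only; verify in real use
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("172.17.80.135:22", "docker", "id_rsa", "hostname")
	fmt.Println(out, err)
}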
	I0314 19:20:49.243393    9056 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 19:20:49.243393    9056 buildroot.go:174] setting up certificates
	I0314 19:20:49.243446    9056 provision.go:84] configureAuth start
	I0314 19:20:49.243502    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:51.238283    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:51.238283    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:51.238797    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:53.609845    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:20:53.609902    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:53.609981    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:55.574166    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:55.574203    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:55.574259    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:57.946938    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:20:57.946938    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:57.946938    9056 provision.go:143] copyHostCerts
	I0314 19:20:57.947635    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 19:20:57.947635    9056 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 19:20:57.947635    9056 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 19:20:57.948171    9056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 19:20:57.949079    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 19:20:57.949079    9056 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 19:20:57.949079    9056 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 19:20:57.949079    9056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 19:20:57.950258    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 19:20:57.950402    9056 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 19:20:57.950402    9056 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 19:20:57.950402    9056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 19:20:57.951403    9056 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-442000-m02 san=[127.0.0.1 172.17.80.135 localhost minikube multinode-442000-m02]
	I0314 19:20:58.197687    9056 provision.go:177] copyRemoteCerts
	I0314 19:20:58.207106    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:20:58.207189    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:00.161737    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:00.162451    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:00.162451    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:02.564717    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:02.564717    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:02.564717    9056 sshutil.go:53] new ssh client: &{IP:172.17.80.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:21:02.679645    9056 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4720629s)
	I0314 19:21:02.679735    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 19:21:02.680163    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:21:02.725298    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 19:21:02.725347    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0314 19:21:02.787571    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 19:21:02.787571    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:21:02.831387    9056 provision.go:87] duration metric: took 13.5869115s to configureAuth
	I0314 19:21:02.831387    9056 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:21:02.832599    9056 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:21:02.832599    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:04.797262    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:04.797262    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:04.797905    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:07.216836    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:07.216836    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:07.223011    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:21:07.223707    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:21:07.223707    9056 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 19:21:07.365033    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 19:21:07.365127    9056 buildroot.go:70] root file system type: tmpfs
	I0314 19:21:07.365337    9056 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 19:21:07.365337    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:09.323507    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:09.323507    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:09.323586    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:11.686958    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:11.686958    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:11.691277    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:21:11.691660    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:21:11.691842    9056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.86.124"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 19:21:11.853818    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.86.124
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 19:21:11.853973    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:13.865969    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:13.865969    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:13.866075    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:16.305917    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:16.305917    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:16.310061    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:21:16.310418    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:21:16.310501    9056 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 19:21:18.385118    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 19:21:18.385178    9056 machine.go:97] duration metric: took 42.7279793s to provisionDockerMachine
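The diff || { mv; systemctl ... } one-liner above is deliberately idempotent: docker.service.new only replaces the installed unit, and Docker only restarts, when the rendered content actually differs (here diff fails because no unit existed yet, so the new file is installed). The same pattern as a local Go sketch, with paths and the helper name assumed:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnit writes the rendered unit and reloads/restarts the service
// only when it differs from what is already on disk.
func installUnit(path string, rendered []byte) error {
	current, _ := os.ReadFile(path) // a missing file reads as empty, like the failed diff
	if bytes.Equal(current, rendered) {
		return nil // nothing changed; leave the running service alone
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return fmt.Errorf("%v: %w", args, err)
		}
	}
	return nil
}

func main() {
	fmt.Println(installUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n")))
}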
	I0314 19:21:18.385232    9056 client.go:171] duration metric: took 1m48.5108818s to LocalClient.Create
	I0314 19:21:18.385288    9056 start.go:167] duration metric: took 1m48.5108818s to libmachine.API.Create "multinode-442000"
	I0314 19:21:18.385288    9056 start.go:293] postStartSetup for "multinode-442000-m02" (driver="hyperv")
	I0314 19:21:18.385344    9056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:21:18.395022    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:21:18.395022    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:20.354008    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:20.354008    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:20.354008    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:22.764602    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:22.764602    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:22.765154    9056 sshutil.go:53] new ssh client: &{IP:172.17.80.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:21:22.883855    9056 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4884924s)
	I0314 19:21:22.892668    9056 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:21:22.899999    9056 command_runner.go:130] > NAME=Buildroot
	I0314 19:21:22.900231    9056 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 19:21:22.900231    9056 command_runner.go:130] > ID=buildroot
	I0314 19:21:22.900231    9056 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 19:21:22.900231    9056 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 19:21:22.900322    9056 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:21:22.900359    9056 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 19:21:22.900679    9056 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 19:21:22.901298    9056 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 19:21:22.901298    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 19:21:22.910000    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:21:22.927545    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 19:21:22.971497    9056 start.go:296] duration metric: took 4.5858613s for postStartSetup
	I0314 19:21:22.973361    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:24.959862    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:24.959862    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:24.960360    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:27.332594    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:27.333279    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:27.333565    9056 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:21:27.336383    9056 start.go:128] duration metric: took 1m57.4644652s to createHost
	I0314 19:21:27.336516    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:29.321236    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:29.321236    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:29.321236    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:31.748593    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:31.748593    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:31.752681    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:21:31.753205    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:21:31.753284    9056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:21:31.885214    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444092.146123639
	
	I0314 19:21:31.885214    9056 fix.go:216] guest clock: 1710444092.146123639
	I0314 19:21:31.885214    9056 fix.go:229] Guest: 2024-03-14 19:21:32.146123639 +0000 UTC Remote: 2024-03-14 19:21:27.3365167 +0000 UTC m=+322.176166501 (delta=4.809606939s)
	I0314 19:21:31.885214    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:33.898724    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:33.898808    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:33.898891    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:36.280671    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:36.280671    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:36.285064    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:21:36.285474    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:21:36.285474    9056 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710444091
	I0314 19:21:36.432820    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 19:21:31 UTC 2024
	
	I0314 19:21:36.432820    9056 fix.go:236] clock set: Thu Mar 14 19:21:31 UTC 2024
	 (err=<nil>)
	I0314 19:21:36.432820    9056 start.go:83] releasing machines lock for "multinode-442000-m02", held for 2m6.5602115s
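The fix.go lines above read the guest clock with date +%s.%N, compute the drift against the host (delta=4.809606939s here), and push the host's time into the guest with sudo date -s @<seconds>. A sketch of that decision; the 2-second threshold is an assumption for illustration, not minikube's actual cutoff:

package main

import (
	"fmt"
	"time"
)

// clockFixCommand returns the command to run in the guest when the
// guest/host drift exceeds the threshold, and false when it is tolerable.
func clockFixCommand(guest, host time.Time) (string, bool) {
	drift := guest.Sub(host)
	if drift < 0 {
		drift = -drift
	}
	if drift <= 2*time.Second { // assumed tolerance
		return "", false
	}
	return fmt.Sprintf("sudo date -s @%d", host.Unix()), true
}

func main() {
	guest := time.Unix(1710444092, 0)
	host := time.Unix(1710444087, 0) // ~5s behind, as in the log
	fmt.Println(clockFixCommand(guest, host))
}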
	I0314 19:21:36.433167    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:38.407447    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:38.407447    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:38.407543    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:40.845857    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:40.846801    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:40.849950    9056 out.go:177] * Found network options:
	I0314 19:21:40.852984    9056 out.go:177]   - NO_PROXY=172.17.86.124
	W0314 19:21:40.855570    9056 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 19:21:40.857868    9056 out.go:177]   - NO_PROXY=172.17.86.124
	W0314 19:21:40.860047    9056 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 19:21:40.863273    9056 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 19:21:40.865129    9056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:21:40.865129    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:40.874075    9056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 19:21:40.874075    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:42.898676    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:42.898676    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:42.898676    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:42.898676    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:42.898676    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:42.898676    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:45.291184    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:45.291184    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:45.291184    9056 sshutil.go:53] new ssh client: &{IP:172.17.80.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:21:45.312508    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:45.313441    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:45.313744    9056 sshutil.go:53] new ssh client: &{IP:172.17.80.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:21:45.395884    9056 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0314 19:21:45.396799    9056 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.522381s)
	W0314 19:21:45.396799    9056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:21:45.409079    9056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:21:45.470622    9056 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 19:21:45.470622    9056 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6051434s)
	I0314 19:21:45.470622    9056 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0314 19:21:45.470622    9056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
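The find/mv pipeline above renames competing bridge and podman CNI configs with a .mk_disabled suffix so the container runtime ignores them (here 87-podman-bridge.conflist). Roughly the same behavior as a Go sketch:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs so they are skipped,
// leaving already-disabled files alone.
func disableBridgeCNI(dir string) ([]string, error) {
	var disabled []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	fmt.Println(disableBridgeCNI("/etc/cni/net.d"))
}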
	I0314 19:21:45.470622    9056 start.go:494] detecting cgroup driver to use...
	I0314 19:21:45.470622    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:21:45.511384    9056 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0314 19:21:45.525850    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 19:21:45.561648    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 19:21:45.584362    9056 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 19:21:45.593786    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 19:21:45.619790    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:21:45.650853    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 19:21:45.678487    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:21:45.706306    9056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:21:45.735839    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 19:21:45.762305    9056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:21:45.778979    9056 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 19:21:45.789246    9056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:21:45.815846    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:21:45.993165    9056 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 19:21:46.022215    9056 start.go:494] detecting cgroup driver to use...
	I0314 19:21:46.034901    9056 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 19:21:46.055701    9056 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0314 19:21:46.055701    9056 command_runner.go:130] > [Unit]
	I0314 19:21:46.055701    9056 command_runner.go:130] > Description=Docker Application Container Engine
	I0314 19:21:46.055701    9056 command_runner.go:130] > Documentation=https://docs.docker.com
	I0314 19:21:46.055701    9056 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0314 19:21:46.055701    9056 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0314 19:21:46.055701    9056 command_runner.go:130] > StartLimitBurst=3
	I0314 19:21:46.055701    9056 command_runner.go:130] > StartLimitIntervalSec=60
	I0314 19:21:46.055701    9056 command_runner.go:130] > [Service]
	I0314 19:21:46.055701    9056 command_runner.go:130] > Type=notify
	I0314 19:21:46.055701    9056 command_runner.go:130] > Restart=on-failure
	I0314 19:21:46.055701    9056 command_runner.go:130] > Environment=NO_PROXY=172.17.86.124
	I0314 19:21:46.055701    9056 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0314 19:21:46.055701    9056 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0314 19:21:46.055701    9056 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0314 19:21:46.055701    9056 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0314 19:21:46.055701    9056 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0314 19:21:46.055701    9056 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0314 19:21:46.055701    9056 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0314 19:21:46.055701    9056 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0314 19:21:46.055701    9056 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0314 19:21:46.055701    9056 command_runner.go:130] > ExecStart=
	I0314 19:21:46.055701    9056 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0314 19:21:46.055701    9056 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0314 19:21:46.055701    9056 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0314 19:21:46.055701    9056 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0314 19:21:46.055701    9056 command_runner.go:130] > LimitNOFILE=infinity
	I0314 19:21:46.055701    9056 command_runner.go:130] > LimitNPROC=infinity
	I0314 19:21:46.055701    9056 command_runner.go:130] > LimitCORE=infinity
	I0314 19:21:46.055701    9056 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0314 19:21:46.055701    9056 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0314 19:21:46.055701    9056 command_runner.go:130] > TasksMax=infinity
	I0314 19:21:46.055701    9056 command_runner.go:130] > TimeoutStartSec=0
	I0314 19:21:46.055701    9056 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0314 19:21:46.055701    9056 command_runner.go:130] > Delegate=yes
	I0314 19:21:46.055701    9056 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0314 19:21:46.055701    9056 command_runner.go:130] > KillMode=process
	I0314 19:21:46.055701    9056 command_runner.go:130] > [Install]
	I0314 19:21:46.055701    9056 command_runner.go:130] > WantedBy=multi-user.target
	I0314 19:21:46.065666    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:21:46.095632    9056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:21:46.133419    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:21:46.163387    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:21:46.195191    9056 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 19:21:46.254006    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:21:46.276679    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:21:46.307042    9056 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0314 19:21:46.320284    9056 ssh_runner.go:195] Run: which cri-dockerd
	I0314 19:21:46.326747    9056 command_runner.go:130] > /usr/bin/cri-dockerd
	I0314 19:21:46.337295    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 19:21:46.354000    9056 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 19:21:46.394928    9056 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 19:21:46.580815    9056 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 19:21:46.780072    9056 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 19:21:46.780198    9056 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 19:21:46.826956    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:21:47.019244    9056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 19:21:49.510074    9056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4906407s)
	I0314 19:21:49.519993    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 19:21:49.551729    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:21:49.582489    9056 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 19:21:49.760362    9056 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 19:21:49.951816    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:21:50.130926    9056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 19:21:50.169259    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:21:50.200452    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:21:50.380350    9056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 19:21:50.477785    9056 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 19:21:50.486557    9056 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 19:21:50.499715    9056 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0314 19:21:50.499755    9056 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 19:21:50.499755    9056 command_runner.go:130] > Device: 0,22	Inode: 887         Links: 1
	I0314 19:21:50.499755    9056 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0314 19:21:50.499755    9056 command_runner.go:130] > Access: 2024-03-14 19:21:50.665869576 +0000
	I0314 19:21:50.499755    9056 command_runner.go:130] > Modify: 2024-03-14 19:21:50.665869576 +0000
	I0314 19:21:50.499755    9056 command_runner.go:130] > Change: 2024-03-14 19:21:50.668869846 +0000
	I0314 19:21:50.499755    9056 command_runner.go:130] >  Birth: -
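start.go then waits up to 60s for /var/run/cri-dockerd.sock, and the stat output above shows the socket already in place. A sketch of such a wait; the 500ms polling interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket stats the path until it exists and is a unix socket.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", time.Minute))
}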
	I0314 19:21:50.499946    9056 start.go:562] Will wait 60s for crictl version
	I0314 19:21:50.510433    9056 ssh_runner.go:195] Run: which crictl
	I0314 19:21:50.516430    9056 command_runner.go:130] > /usr/bin/crictl
	I0314 19:21:50.525289    9056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:21:50.590812    9056 command_runner.go:130] > Version:  0.1.0
	I0314 19:21:50.590812    9056 command_runner.go:130] > RuntimeName:  docker
	I0314 19:21:50.590812    9056 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0314 19:21:50.590812    9056 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 19:21:50.590812    9056 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 19:21:50.597895    9056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:21:50.628972    9056 command_runner.go:130] > 25.0.4
	I0314 19:21:50.640419    9056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:21:50.673001    9056 command_runner.go:130] > 25.0.4
	I0314 19:21:50.678165    9056 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 19:21:50.681651    9056 out.go:177]   - env NO_PROXY=172.17.86.124
	I0314 19:21:50.684655    9056 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 19:21:50.689460    9056 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 19:21:50.689460    9056 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 19:21:50.689460    9056 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 19:21:50.689460    9056 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 19:21:50.691944    9056 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 19:21:50.691944    9056 ip.go:210] interface addr: 172.17.80.1/20
	I0314 19:21:50.702531    9056 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 19:21:50.709031    9056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:21:50.731240    9056 mustload.go:65] Loading cluster: multinode-442000
	I0314 19:21:50.731990    9056 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:21:50.732778    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:21:52.695829    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:52.695829    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:52.695829    9056 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:21:52.696455    9056 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000 for IP: 172.17.80.135
	I0314 19:21:52.696455    9056 certs.go:194] generating shared ca certs ...
	I0314 19:21:52.696530    9056 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:21:52.696676    9056 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 19:21:52.697203    9056 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 19:21:52.697397    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 19:21:52.697397    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 19:21:52.697397    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 19:21:52.697397    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 19:21:52.698111    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 19:21:52.698194    9056 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 19:21:52.698194    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 19:21:52.698194    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 19:21:52.698194    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 19:21:52.698811    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 19:21:52.698930    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 19:21:52.698930    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 19:21:52.698930    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:21:52.699455    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 19:21:52.699612    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:21:52.757346    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 19:21:52.801667    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:21:52.864261    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 19:21:52.916136    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 19:21:52.958938    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:21:53.000895    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 19:21:53.052980    9056 ssh_runner.go:195] Run: openssl version
	I0314 19:21:53.061295    9056 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 19:21:53.071511    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 19:21:53.126377    9056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 19:21:53.134305    9056 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:21:53.134426    9056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:21:53.144633    9056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 19:21:53.153091    9056 command_runner.go:130] > 51391683
	I0314 19:21:53.162050    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
	I0314 19:21:53.190803    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 19:21:53.220117    9056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 19:21:53.227468    9056 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:21:53.227468    9056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:21:53.236730    9056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 19:21:53.244889    9056 command_runner.go:130] > 3ec20f2e
	I0314 19:21:53.254484    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:21:53.281783    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:21:53.308798    9056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:21:53.316008    9056 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:21:53.316101    9056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:21:53.324560    9056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:21:53.333421    9056 command_runner.go:130] > b5213941
	I0314 19:21:53.344525    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
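What the repeated openssl/ln pairs above implement is the classic OpenSSL hashed-certificate directory: each CA under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 so verification can find it by hash. A minimal Go sketch of one such step, shelling out to openssl exactly as the logged commands do (the path is taken from the log; error handling is simplified and the symlink needs root, like the logged sudo calls):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // ensureHashLink mirrors the logged "openssl x509 -hash -noout -in" plus
    // "ln -fs" pair: it asks openssl for the certificate's subject hash and
    // links /etc/ssl/certs/<hash>.0 at the PEM file.
    func ensureHashLink(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // behave like "ln -fs": replace a stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := ensureHashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }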
	I0314 19:21:53.371355    9056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:21:53.378104    9056 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:21:53.378104    9056 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:21:53.378369    9056 kubeadm.go:928] updating node {m02 172.17.80.135 8443 v1.28.4 docker false true} ...
	I0314 19:21:53.378523    9056 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-442000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.80.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:21:53.387263    9056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:21:53.403494    9056 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0314 19:21:53.403737    9056 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0314 19:21:53.412504    9056 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0314 19:21:53.429660    9056 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0314 19:21:53.429774    9056 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0314 19:21:53.429660    9056 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0314 19:21:53.429899    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 19:21:53.429952    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 19:21:53.440897    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:21:53.442009    9056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 19:21:53.444613    9056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 19:21:53.462453    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 19:21:53.462510    9056 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0314 19:21:53.462593    9056 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0314 19:21:53.462632    9056 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0314 19:21:53.462658    9056 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0314 19:21:53.462658    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0314 19:21:53.462658    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0314 19:21:53.471915    9056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 19:21:53.576412    9056 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0314 19:21:53.576412    9056 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0314 19:21:53.576730    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
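The `?checksum=file:...sha256` suffix on the three dl.k8s.io URLs above means the digest published next to each binary is fetched and compared before the file is trusted. A rough Go sketch of that download-and-verify step, assuming the .sha256 file carries the hex digest as its first field (as the Kubernetes release artifacts do):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // downloadVerified streams url into dst while hashing it, then checks the
    // result against the hex digest published at url+".sha256".
    func downloadVerified(url, dst string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        f, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer f.Close()

        // Hash the stream as it is written so the file is read only once.
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }

        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        published, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        fields := strings.Fields(string(published))
        if len(fields) == 0 {
            return fmt.Errorf("empty checksum file for %s", url)
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != fields[0] {
            return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, fields[0])
        }
        return nil
    }

    func main() {
        // URL copied from the log; this downloads roughly 110 MB.
        url := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet"
        if err := downloadVerified(url, "kubelet"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }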
	I0314 19:21:54.503377    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0314 19:21:54.520475    9056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0314 19:21:54.549132    9056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:21:54.587668    9056 ssh_runner.go:195] Run: grep 172.17.86.124	control-plane.minikube.internal$ /etc/hosts
	I0314 19:21:54.593847    9056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.86.124	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
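The one-liner above is the usual filter-append-replace idiom for /etc/hosts: drop any stale control-plane.minikube.internal line, append the current mapping, and copy a temp file over the original so readers never see a half-written file. The same idea in Go (a local sketch; in the log the command runs over SSH inside the guest):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites path so it contains exactly one line mapping
    // host to ip, mirroring the logged grep/echo/cp pipeline.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale entry for this host
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        // Replace in one step instead of editing the live file in place.
        return os.Rename(tmp, path)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "172.17.86.124", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }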
	I0314 19:21:54.623462    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:21:54.814765    9056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:21:54.843634    9056 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:21:54.843871    9056 start.go:316] joinCluster: &{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:54.844400    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0314 19:21:54.844400    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:21:56.802297    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:56.802297    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:56.802505    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:59.177515    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:21:59.178040    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:59.178593    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:21:59.364396    9056 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token pa31bj.d06vwfoo3c12dik2 --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb 
	I0314 19:21:59.364531    9056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5197105s)
	I0314 19:21:59.364702    9056 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:21:59.364795    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pa31bj.d06vwfoo3c12dik2 --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-442000-m02"
	I0314 19:21:59.587379    9056 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:22:02.378780    9056 command_runner.go:130] > [preflight] Running pre-flight checks
	I0314 19:22:02.378850    9056 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0314 19:22:02.378850    9056 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0314 19:22:02.378850    9056 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:22:02.378850    9056 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:22:02.378850    9056 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0314 19:22:02.378850    9056 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0314 19:22:02.378850    9056 command_runner.go:130] > This node has joined the cluster:
	I0314 19:22:02.378850    9056 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0314 19:22:02.378850    9056 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0314 19:22:02.378850    9056 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0314 19:22:02.378850    9056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pa31bj.d06vwfoo3c12dik2 --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-442000-m02": (3.0138259s)
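The --discovery-token-ca-cert-hash carried by the join command above pins the cluster CA by the SHA-256 of its Subject Public Key Info (SPKI), not of the whole certificate. A short Go sketch that recomputes the value from a CA PEM for comparison (the path is the guest-side location seen earlier in this log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    // spkiHash returns the kubeadm-style discovery hash for a PEM CA cert:
    // "sha256:" plus the hex SHA-256 over the Subject Public Key Info.
    func spkiHash(pemPath string) (string, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("%s: no PEM block found", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return "sha256:" + hex.EncodeToString(sum[:]), nil
    }

    func main() {
        h, err := spkiHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(h) // compare against the --discovery-token-ca-cert-hash value
    }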
	I0314 19:22:02.378850    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0314 19:22:02.582221    9056 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0314 19:22:02.810448    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-442000-m02 minikube.k8s.io/updated_at=2024_03_14T19_22_02_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=multinode-442000 minikube.k8s.io/primary=false
	I0314 19:22:02.926192    9056 command_runner.go:130] > node/multinode-442000-m02 labeled
	I0314 19:22:02.928378    9056 start.go:318] duration metric: took 8.0838931s to joinCluster
	I0314 19:22:02.928378    9056 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:22:02.929124    9056 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:22:02.933271    9056 out.go:177] * Verifying Kubernetes components...
	I0314 19:22:02.945308    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:22:03.162242    9056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:22:03.189553    9056 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:22:03.189708    9056 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.86.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:22:03.190392    9056 node_ready.go:35] waiting up to 6m0s for node "multinode-442000-m02" to be "Ready" ...
	I0314 19:22:03.190392    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:03.190392    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:03.190392    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:03.190392    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:03.213671    9056 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0314 19:22:03.213671    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:03.213671    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:03.213671    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:03.213671    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:03.213671    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:03.213671    9056 round_trippers.go:580]     Content-Length: 4043
	I0314 19:22:03.213671    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:03 GMT
	I0314 19:22:03.213671    9056 round_trippers.go:580]     Audit-Id: 76549ac2-c016-4094-82fa-31e656431630
	I0314 19:22:03.213671    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"600","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0314 19:22:03.699529    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:03.699529    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:03.699618    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:03.699618    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:03.706389    9056 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:22:03.706389    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:03.706389    9056 round_trippers.go:580]     Content-Length: 4043
	I0314 19:22:03.706389    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:03 GMT
	I0314 19:22:03.706389    9056 round_trippers.go:580]     Audit-Id: 1ef77955-8b11-4c22-9104-8f935db0dee8
	I0314 19:22:03.706389    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:03.706389    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:03.706389    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:03.706389    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:03.706389    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"600","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0314 19:22:04.205843    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:04.206007    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:04.206007    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:04.206007    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:04.209426    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:04.210053    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:04.210053    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:04.210053    9056 round_trippers.go:580]     Content-Length: 4043
	I0314 19:22:04.210053    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:04 GMT
	I0314 19:22:04.210053    9056 round_trippers.go:580]     Audit-Id: 1ca89c59-521d-4aee-803c-e11acbdd8349
	I0314 19:22:04.210053    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:04.210053    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:04.210161    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:04.210267    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"600","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0314 19:22:04.691464    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:04.691577    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:04.691577    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:04.691577    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:04.695673    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:04.695761    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:04.695761    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:04.695761    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:04.695761    9056 round_trippers.go:580]     Content-Length: 4043
	I0314 19:22:04.695843    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:04 GMT
	I0314 19:22:04.695843    9056 round_trippers.go:580]     Audit-Id: 3f50a233-11bf-4ac8-8b56-f38a3841487e
	I0314 19:22:04.695843    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:04.695843    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:04.696103    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"600","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0314 19:22:05.191223    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:05.191348    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:05.191348    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:05.191348    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:05.194400    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:05.194400    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:05.194400    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:05.194400    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:05.194400    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:05.194400    9056 round_trippers.go:580]     Content-Length: 4043
	I0314 19:22:05.194400    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:05 GMT
	I0314 19:22:05.195298    9056 round_trippers.go:580]     Audit-Id: ba6a682e-11a6-4ff6-ae98-26511c27ceb5
	I0314 19:22:05.195298    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:05.195431    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"600","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0314 19:22:05.195889    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:05.704574    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:05.704574    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:05.704574    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:05.704574    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:05.708154    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:05.708154    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:05.708154    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:05.708154    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:05.708154    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:05.708154    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:05.708154    9056 round_trippers.go:580]     Content-Length: 4043
	I0314 19:22:05.708154    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:05 GMT
	I0314 19:22:05.708154    9056 round_trippers.go:580]     Audit-Id: b5bdd9f7-684d-4d3f-902b-228ea890e4d1
	I0314 19:22:05.708154    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"600","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0314 19:22:06.194903    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:06.194903    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:06.194903    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:06.194903    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:06.478040    9056 round_trippers.go:574] Response Status: 200 OK in 283 milliseconds
	I0314 19:22:06.478425    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:06.478425    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:06.478425    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:06.478425    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:06 GMT
	I0314 19:22:06.478425    9056 round_trippers.go:580]     Audit-Id: f04f9474-d219-4014-af13-36a5dd1ea8c6
	I0314 19:22:06.478425    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:06.478425    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:06.478528    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:06.695274    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:06.695274    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:06.695274    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:06.695274    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:06.698842    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:06.699065    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:06.699065    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:06 GMT
	I0314 19:22:06.699065    9056 round_trippers.go:580]     Audit-Id: d710c207-de59-48df-8844-d5be09fc9753
	I0314 19:22:06.699065    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:06.699065    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:06.699065    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:06.699065    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:06.699234    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:07.200099    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:07.201042    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:07.201042    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:07.201042    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:07.206187    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:22:07.206187    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:07.206187    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:07.206187    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:07.206187    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:07.206187    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:07.206187    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:07 GMT
	I0314 19:22:07.206187    9056 round_trippers.go:580]     Audit-Id: c85cd24e-e783-44f4-9764-8e46d7990538
	I0314 19:22:07.207081    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:07.207330    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:07.704755    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:07.704755    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:07.704755    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:07.704755    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:07.708332    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:07.708332    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:07.708332    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:07.708332    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:07.709081    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:07 GMT
	I0314 19:22:07.709081    9056 round_trippers.go:580]     Audit-Id: 2a0c3e36-fc9d-4fd1-a9ff-a980ab8a6c91
	I0314 19:22:07.709081    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:07.709081    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:07.709248    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:08.191577    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:08.191649    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:08.191649    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:08.191649    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:08.214579    9056 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0314 19:22:08.214611    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:08.214611    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:08 GMT
	I0314 19:22:08.214611    9056 round_trippers.go:580]     Audit-Id: 324b6e3d-1f27-4fc8-9ba8-df4ac9e7c92f
	I0314 19:22:08.214611    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:08.214611    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:08.214611    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:08.214689    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:08.214750    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:08.698189    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:08.698189    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:08.698189    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:08.698189    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:08.702533    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:08.702533    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:08.703530    9056 round_trippers.go:580]     Audit-Id: bca58514-d761-4ccb-a17a-51c75a826e06
	I0314 19:22:08.703530    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:08.703530    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:08.703530    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:08.703530    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:08.703530    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:08 GMT
	I0314 19:22:08.703530    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:09.192702    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:09.192806    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:09.192806    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:09.192806    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:09.200113    9056 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:22:09.200113    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:09.200113    9056 round_trippers.go:580]     Audit-Id: 3d235f04-3eb3-4624-8726-764fbf5d0715
	I0314 19:22:09.200113    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:09.200113    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:09.200113    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:09.200113    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:09.200113    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:09 GMT
	I0314 19:22:09.200113    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:09.699265    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:09.699472    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:09.699472    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:09.699472    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:09.703786    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:09.703786    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:09.703786    9056 round_trippers.go:580]     Audit-Id: f3626a08-c6da-4fb2-991a-2c149c864a1d
	I0314 19:22:09.703786    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:09.703786    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:09.703786    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:09.703786    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:09.703786    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:09 GMT
	I0314 19:22:09.704551    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:09.704953    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:10.192188    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:10.192292    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:10.192292    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:10.192292    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:10.198533    9056 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:22:10.198533    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:10.198533    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:10.198533    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:10.198533    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:10 GMT
	I0314 19:22:10.198533    9056 round_trippers.go:580]     Audit-Id: e1729667-d3a5-409c-bd49-835930ffe97d
	I0314 19:22:10.198533    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:10.198533    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:10.198533    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:10.695332    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:10.695406    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:10.695406    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:10.695406    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:10.698685    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:10.698685    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:10.698685    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:10.698685    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:10.698685    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:10.698685    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:10 GMT
	I0314 19:22:10.698685    9056 round_trippers.go:580]     Audit-Id: f2c84a08-e923-433e-9b0c-8c00d6d25e4d
	I0314 19:22:10.698685    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:10.699130    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:11.199816    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:11.199816    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:11.199816    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:11.199894    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:11.204119    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:11.204119    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:11.204119    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:11 GMT
	I0314 19:22:11.204119    9056 round_trippers.go:580]     Audit-Id: f4a22172-bfc6-4941-9b42-0737a9dfab48
	I0314 19:22:11.204119    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:11.204119    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:11.204119    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:11.204119    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:11.204737    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:11.691419    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:11.691474    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:11.691474    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:11.691540    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:11.695388    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:11.695388    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:11.695479    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:11.695479    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:11.695479    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:11.695479    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:11.695479    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:11 GMT
	I0314 19:22:11.695479    9056 round_trippers.go:580]     Audit-Id: 5eb1f044-c342-4eed-84a5-40ebe6c1dde8
	I0314 19:22:11.695544    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:12.201094    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:12.201198    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:12.201198    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:12.201250    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:12.283598    9056 round_trippers.go:574] Response Status: 200 OK in 82 milliseconds
	I0314 19:22:12.283598    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:12.283598    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:12.283598    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:12.283598    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:12.283598    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:12 GMT
	I0314 19:22:12.283598    9056 round_trippers.go:580]     Audit-Id: af0c4f43-20cf-454d-88b9-3eb69b563512
	I0314 19:22:12.283598    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:12.284506    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:12.284862    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:12.705048    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:12.705048    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:12.705048    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:12.705048    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:13.006310    9056 round_trippers.go:574] Response Status: 200 OK in 301 milliseconds
	I0314 19:22:13.006401    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:13.006401    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:13.006401    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:13.006401    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:13.006401    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:13 GMT
	I0314 19:22:13.006464    9056 round_trippers.go:580]     Audit-Id: f01dde5e-0d5e-48d3-88c6-c56a2afb4236
	I0314 19:22:13.006464    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:13.006654    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:13.204365    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:13.204365    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:13.204365    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:13.204365    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:13.207970    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:13.207970    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:13.207970    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:13 GMT
	I0314 19:22:13.207970    9056 round_trippers.go:580]     Audit-Id: 9b9fd8c4-a310-4247-884e-3a050152c897
	I0314 19:22:13.207970    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:13.207970    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:13.208156    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:13.208156    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:13.208156    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:13.695731    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:13.696036    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:13.696036    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:13.696036    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:13.700122    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:13.700122    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:13.700122    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:13.700122    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:13 GMT
	I0314 19:22:13.700122    9056 round_trippers.go:580]     Audit-Id: 63041cf6-04c8-4687-9adb-0f73e0aee851
	I0314 19:22:13.700122    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:13.700122    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:13.700122    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:13.700122    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:14.200985    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:14.200985    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:14.201059    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:14.201059    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:14.204361    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:14.204763    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:14.204819    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:14.204819    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:14 GMT
	I0314 19:22:14.204819    9056 round_trippers.go:580]     Audit-Id: 4252016d-ff63-4b71-8b86-53710940043e
	I0314 19:22:14.204819    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:14.204819    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:14.204819    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:14.204878    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:14.703803    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:14.703897    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:14.703897    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:14.703897    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:14.708421    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:14.708873    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:14.708873    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:14.708873    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:14.708873    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:14.708873    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:14 GMT
	I0314 19:22:14.708873    9056 round_trippers.go:580]     Audit-Id: 31966180-888f-4b64-89dd-a93f05ef71ad
	I0314 19:22:14.708873    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:14.708873    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:14.708873    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:15.206320    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:15.206373    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:15.206373    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:15.206373    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:15.211982    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:22:15.211982    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:15.211982    9056 round_trippers.go:580]     Audit-Id: 67729e53-8cea-4e7d-8a8e-8f950ce19db7
	I0314 19:22:15.211982    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:15.211982    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:15.211982    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:15.211982    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:15.211982    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:15 GMT
	I0314 19:22:15.213224    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:15.697654    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:15.697654    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:15.697654    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:15.697654    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:15.701661    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:15.701661    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:15.701661    9056 round_trippers.go:580]     Audit-Id: aec38901-0be5-4bd9-977c-ed08a8358977
	I0314 19:22:15.701661    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:15.701661    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:15.701661    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:15.701661    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:15.701661    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:15 GMT
	I0314 19:22:15.701990    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:16.206045    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:16.206045    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:16.206045    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:16.206045    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:16.209645    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:16.210387    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:16.210496    9056 round_trippers.go:580]     Audit-Id: a29dae83-c050-47e0-b7f9-e1c2f035c26c
	I0314 19:22:16.210599    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:16.210645    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:16.210645    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:16.210645    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:16.210744    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:16 GMT
	I0314 19:22:16.210906    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:16.696072    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:16.696291    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:16.696291    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:16.696291    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:16.699756    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:16.699756    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:16.699756    9056 round_trippers.go:580]     Audit-Id: c0e6bfe0-98b7-4bb6-b340-5164d263f588
	I0314 19:22:16.699756    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:16.699756    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:16.699756    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:16.699756    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:16.699756    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:16 GMT
	I0314 19:22:16.700618    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:17.205365    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:17.205365    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:17.205365    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:17.205365    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:17.208926    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:17.208926    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:17.208926    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:17 GMT
	I0314 19:22:17.208926    9056 round_trippers.go:580]     Audit-Id: be4e5713-635d-403b-abda-f8ba44a75dba
	I0314 19:22:17.208926    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:17.208926    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:17.208926    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:17.208926    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:17.208926    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:17.209747    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:17.695566    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:17.695566    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:17.695566    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:17.695566    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:17.699849    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:17.699849    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:17.699927    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:17.699927    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:17.699927    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:17.699927    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:17 GMT
	I0314 19:22:17.699927    9056 round_trippers.go:580]     Audit-Id: 4dd4472f-af53-42ad-9df9-4e536c503cf9
	I0314 19:22:17.699927    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:17.700114    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:18.203651    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:18.203651    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:18.203651    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:18.203651    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:18.208982    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:22:18.208982    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:18.208982    9056 round_trippers.go:580]     Audit-Id: df3eb8ec-76a3-44eb-9215-f84b82c0c9b0
	I0314 19:22:18.209096    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:18.209096    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:18.209096    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:18.209096    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:18.209096    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:18 GMT
	I0314 19:22:18.210475    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:18.707530    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:18.707530    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:18.707530    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:18.707530    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:19.086661    9056 round_trippers.go:574] Response Status: 200 OK in 379 milliseconds
	I0314 19:22:19.086787    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:19.086787    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:19.086787    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:19.086787    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:19.086787    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:19 GMT
	I0314 19:22:19.086787    9056 round_trippers.go:580]     Audit-Id: e47406ed-6228-4469-9514-553e877412d7
	I0314 19:22:19.086787    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:19.087044    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:19.205116    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:19.205116    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:19.205116    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:19.205116    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:19.208682    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:19.208682    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:19.208682    9056 round_trippers.go:580]     Audit-Id: dbaa9b4b-2b58-4db4-9975-5579ad614abb
	I0314 19:22:19.208682    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:19.208682    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:19.208682    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:19.208682    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:19.208682    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:19 GMT
	I0314 19:22:19.209045    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:19.704742    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:19.705142    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:19.705142    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:19.705142    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:19.707721    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:19.707721    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:19.707721    9056 round_trippers.go:580]     Audit-Id: 59a0cc58-9962-4d56-823f-8821a4a7942b
	I0314 19:22:19.707721    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:19.708735    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:19.708735    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:19.708735    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:19.708735    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:19 GMT
	I0314 19:22:19.708735    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:19.709266    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:20.192200    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:20.192200    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.192285    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.192285    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.196918    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:20.196966    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.196966    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.196966    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.196966    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.197072    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.197072    9056 round_trippers.go:580]     Audit-Id: 2aefd559-ed17-4a09-bdfd-bad5dda25a6f
	I0314 19:22:20.197072    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.197330    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:20.694089    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:20.694168    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.694168    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.694168    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.697029    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.697029    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.697029    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.697029    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.697029    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.697029    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.698052    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.698052    9056 round_trippers.go:580]     Audit-Id: f35fe208-b0f9-488d-945f-63f9cce340c1
	I0314 19:22:20.698108    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"634","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3263 chars]
	I0314 19:22:20.698108    9056 node_ready.go:49] node "multinode-442000-m02" has status "Ready":"True"
	I0314 19:22:20.698108    9056 node_ready.go:38] duration metric: took 17.5063863s for node "multinode-442000-m02" to be "Ready" ...
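
The node_ready entries above reflect a simple polling loop: minikube issues GET /api/v1/nodes/multinode-442000-m02 roughly every 500ms and inspects the Node's Ready condition until it reports True (here after 17.5s, once the node's resourceVersion advanced to 634). Below is a minimal sketch of that kind of loop using client-go; the kubeconfig path, node name, poll interval, and timeout are illustrative assumptions inferred from the log, not minikube's actual node_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the Node's Ready condition is True.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumptions: default kubeconfig location, 500ms poll interval, and a
	// 6m deadline, mirroring the cadence and timeout visible in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Each iteration is one GET /api/v1/nodes/<name>, as in the log above.
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-442000-m02", metav1.GetOptions{})
		if err == nil && isNodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node Ready")
}
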
	I0314 19:22:20.698108    9056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:22:20.698108    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods
	I0314 19:22:20.698640    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.698640    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.698640    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.703825    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:22:20.703825    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.703825    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.703825    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.703825    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.703825    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.703825    9056 round_trippers.go:580]     Audit-Id: c9cd5ed8-a0a4-466d-8028-0cb23df4e487
	I0314 19:22:20.704303    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.705603    9056 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"634"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"446","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67474 chars]
	I0314 19:22:20.708556    9056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.708608    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:22:20.708769    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.708769    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.708769    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.711822    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:20.711822    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.711822    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.711822    9056 round_trippers.go:580]     Audit-Id: f26291dd-274e-452d-a9c6-36e64c29afd0
	I0314 19:22:20.711822    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.711822    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.711822    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.711822    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.711822    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"446","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0314 19:22:20.711822    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:20.711822    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.711822    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.711822    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.714644    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.714644    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.714644    9056 round_trippers.go:580]     Audit-Id: 18d4c718-19da-416d-a314-5434ece1c248
	I0314 19:22:20.714644    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.714644    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.715528    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.715528    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.715528    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.715746    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0314 19:22:20.716092    9056 pod_ready.go:92] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:20.716185    9056 pod_ready.go:81] duration metric: took 7.4833ms for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
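
Once the node is Ready, the pod_ready entries show minikube listing the kube-system pods (with the label selectors enumerated earlier, e.g. k8s-app=kube-dns) and then waiting on each pod's Ready condition in turn. A hedged sketch of that per-pod check with client-go follows; the package, function names, and the single-shot check are assumptions for illustration (minikube's pod_ready.go retries each pod until its own deadline, much like the node loop sketched above).

package podready

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the Pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// checkSystemPods lists kube-system pods matching one label selector (as the
// PodList request above does) and verifies each pod's readiness once.
func checkSystemPods(client kubernetes.Interface, selector string) error {
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	for i := range pods.Items {
		if !isPodReady(&pods.Items[i]) {
			return fmt.Errorf("pod %s not Ready yet", pods.Items[i].Name)
		}
	}
	return nil
}
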
	I0314 19:22:20.716185    9056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.716281    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-442000
	I0314 19:22:20.716281    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.716281    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.716281    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.718654    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.718654    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.718654    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.718654    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.718654    9056 round_trippers.go:580]     Audit-Id: fa148d72-9d80-42ca-8b36-0b02a516e91f
	I0314 19:22:20.718654    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.718654    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.718654    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.718654    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-442000","namespace":"kube-system","uid":"8974ad44-5d36-48f0-bc6b-9115bab5fb5e","resourceVersion":"410","creationTimestamp":"2024-03-14T19:19:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.86.124:2379","kubernetes.io/config.hash":"92e70beb375f9f247f5f8395dc065033","kubernetes.io/config.mirror":"92e70beb375f9f247f5f8395dc065033","kubernetes.io/config.seen":"2024-03-14T19:18:55.420198507Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0314 19:22:20.719653    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:20.719653    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.719653    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.719653    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.722529    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.722529    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.722529    9056 round_trippers.go:580]     Audit-Id: 916a3673-f6ae-4ec5-b535-b4b7aa07ce24
	I0314 19:22:20.722529    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.722529    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.722529    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.722529    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.722529    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.723181    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0314 19:22:20.723539    9056 pod_ready.go:92] pod "etcd-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:20.723539    9056 pod_ready.go:81] duration metric: took 7.3534ms for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.723539    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.723539    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-442000
	I0314 19:22:20.723539    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.723539    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.723539    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.726214    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.727156    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.727156    9056 round_trippers.go:580]     Audit-Id: e3e15290-b739-481c-a676-a49d92fc7192
	I0314 19:22:20.727156    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.727156    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.727156    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.727156    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.727156    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.727437    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-442000","namespace":"kube-system","uid":"02a2d011-5f4c-451c-9698-a88e42e4b6c9","resourceVersion":"414","creationTimestamp":"2024-03-14T19:19:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.86.124:8443","kubernetes.io/config.hash":"81fdcd9740169a0b72b7c7316eeac39f","kubernetes.io/config.mirror":"81fdcd9740169a0b72b7c7316eeac39f","kubernetes.io/config.seen":"2024-03-14T19:18:55.420203908Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0314 19:22:20.728006    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:20.728006    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.728006    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.728006    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.730570    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.730570    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.730570    9056 round_trippers.go:580]     Audit-Id: d0c9ba65-6c43-4b80-afc1-76dbcd1c7eeb
	I0314 19:22:20.730570    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.730570    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.730570    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.730570    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.730762    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.730884    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0314 19:22:20.731236    9056 pod_ready.go:92] pod "kube-apiserver-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:20.731236    9056 pod_ready.go:81] duration metric: took 7.6966ms for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.731236    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.731236    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-442000
	I0314 19:22:20.731236    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.731236    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.731236    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.733785    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.733785    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.733785    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.733785    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.733785    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.733785    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.733785    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:20.734704    9056 round_trippers.go:580]     Audit-Id: 126dc5b6-78ff-4621-9c8f-62cba4e55a0c
	I0314 19:22:20.734863    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-442000","namespace":"kube-system","uid":"b16fc874-ef74-44ca-a54f-bb678bf982df","resourceVersion":"413","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.mirror":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.seen":"2024-03-14T19:18:55.420205308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0314 19:22:20.735446    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:20.735446    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.735446    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.735529    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.737656    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.737656    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.737656    9056 round_trippers.go:580]     Audit-Id: 2024caab-e501-49a1-bbc1-ed3bc967ebd6
	I0314 19:22:20.737656    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.737656    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.737924    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.737924    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.737924    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:20.738162    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0314 19:22:20.738509    9056 pod_ready.go:92] pod "kube-controller-manager-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:20.738563    9056 pod_ready.go:81] duration metric: took 7.3258ms for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.738563    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.899228    9056 request.go:629] Waited for 160.5036ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:22:20.899439    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:22:20.899439    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.899439    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.899439    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.901746    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.902757    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.902757    9056 round_trippers.go:580]     Audit-Id: aebd7976-ca81-4e2f-930c-44e62d4143ec
	I0314 19:22:20.902757    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.902757    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.902757    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.902757    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.902757    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:20.902757    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-72dzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"80b840b0-3803-4102-a966-ea73aed74f49","resourceVersion":"621","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0314 19:22:21.103179    9056 request.go:629] Waited for 199.2119ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:21.103276    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:21.103401    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:21.103437    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:21.103468    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:21.106883    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:21.106883    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:21.106883    9056 round_trippers.go:580]     Audit-Id: 93e3aee3-376f-40c8-94a1-0ebc40f09d35
	I0314 19:22:21.106883    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:21.106883    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:21.106883    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:21.106883    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:21.107338    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:21.107478    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"635","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3143 chars]
	I0314 19:22:21.107863    9056 pod_ready.go:92] pod "kube-proxy-72dzs" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:21.107933    9056 pod_ready.go:81] duration metric: took 369.3428ms for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:21.107933    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:21.306275    9056 request.go:629] Waited for 198.1726ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:22:21.306275    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:22:21.306275    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:21.306275    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:21.306275    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:21.310796    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:21.311205    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:21.311205    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:21.311205    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:21.311205    9056 round_trippers.go:580]     Audit-Id: 4d2d17d8-9b9e-4bf8-ab4f-a7b42a77d699
	I0314 19:22:21.311205    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:21.311205    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:21.311277    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:21.311311    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cg28g","generateName":"kube-proxy-","namespace":"kube-system","uid":"c7f798bf-6722-4731-af8d-ccd5703d116e","resourceVersion":"405","creationTimestamp":"2024-03-14T19:19:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0314 19:22:21.508993    9056 request.go:629] Waited for 196.8279ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:21.508993    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:21.508993    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:21.508993    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:21.508993    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:21.513559    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:21.513559    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:21.513559    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:21.513559    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:21.513559    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:21.513559    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:21.513559    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:21.513680    9056 round_trippers.go:580]     Audit-Id: 96c6cc8c-df8a-41cf-9928-0f49aa8beb2b
	I0314 19:22:21.513870    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0314 19:22:21.514236    9056 pod_ready.go:92] pod "kube-proxy-cg28g" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:21.514236    9056 pod_ready.go:81] duration metric: took 406.2713ms for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:21.514236    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:21.694699    9056 request.go:629] Waited for 180.45ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:22:21.695033    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:22:21.695132    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:21.695132    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:21.695132    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:21.698362    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:21.698362    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:21.698362    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:21.698362    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:21.698752    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:21.698752    9056 round_trippers.go:580]     Audit-Id: b34b4ac5-efc4-41e0-9884-1b5a3e0ead3e
	I0314 19:22:21.698792    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:21.698792    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:21.698914    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-442000","namespace":"kube-system","uid":"76b10598-fe0d-4a14-a8e4-a32221fbb68f","resourceVersion":"412","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.mirror":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.seen":"2024-03-14T19:18:55.420206709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0314 19:22:21.897293    9056 request.go:629] Waited for 197.8239ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:21.897293    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:21.897293    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:21.897293    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:21.897293    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:21.900882    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:21.900882    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:21.901742    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:21.901742    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:22 GMT
	I0314 19:22:21.901742    9056 round_trippers.go:580]     Audit-Id: 78cfc5a4-545f-40f2-af33-db5aa70afd65
	I0314 19:22:21.901742    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:21.901742    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:21.901742    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:21.902072    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0314 19:22:21.902904    9056 pod_ready.go:92] pod "kube-scheduler-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:21.902904    9056 pod_ready.go:81] duration metric: took 388.6387ms for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:21.902904    9056 pod_ready.go:38] duration metric: took 1.2047047s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:22:21.903009    9056 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:22:21.915673    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:22:21.939086    9056 system_svc.go:56] duration metric: took 36.1794ms WaitForService to wait for kubelet
	I0314 19:22:21.939086    9056 kubeadm.go:576] duration metric: took 19.0092651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:22:21.939086    9056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:22:22.106032    9056 request.go:629] Waited for 166.9331ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/nodes
	I0314 19:22:22.106032    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes
	I0314 19:22:22.106032    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:22.106363    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:22.106363    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:22.111386    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:22.111410    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:22.111410    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:22.111410    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:22 GMT
	I0314 19:22:22.111410    9056 round_trippers.go:580]     Audit-Id: ac7c66ed-665d-4c77-8b4d-efbe1ec95106
	I0314 19:22:22.111410    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:22.111410    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:22.111410    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:22.112442    9056 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"636"},"items":[{"metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9146 chars]
	I0314 19:22:22.113250    9056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:22:22.113250    9056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:22:22.113332    9056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:22:22.113332    9056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:22:22.113332    9056 node_conditions.go:105] duration metric: took 174.2327ms to run NodePressure ...
	I0314 19:22:22.113332    9056 start.go:240] waiting for startup goroutines ...
	I0314 19:22:22.113416    9056 start.go:254] writing updated cluster config ...
	I0314 19:22:22.123664    9056 ssh_runner.go:195] Run: rm -f paused
	I0314 19:22:22.251525    9056 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:22:22.257451    9056 out.go:177] * Done! kubectl is now configured to use "multinode-442000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.099333194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.122223374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.122359695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.122394000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.122594032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:19:27 multinode-442000 cri-dockerd[1219]: time="2024-03-14T19:19:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b179d157b6b2f71cc980c7ea5060a613be77e84e89947fbcb91a687ea7310eaf/resolv.conf as [nameserver 172.17.80.1]"
	Mar 14 19:19:27 multinode-442000 cri-dockerd[1219]: time="2024-03-14T19:19:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0/resolv.conf as [nameserver 172.17.80.1]"
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.457252020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.457435646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.457523558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.457633273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.584396423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.584530241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.584550244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.584707966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:22:46 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:46.089104493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 19:22:46 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:46.089180603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 19:22:46 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:46.089197505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:22:46 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:46.089352524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:22:46 multinode-442000 cri-dockerd[1219]: time="2024-03-14T19:22:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 14 19:22:47 multinode-442000 cri-dockerd[1219]: time="2024-03-14T19:22:47Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 14 19:22:47 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:47.593294878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 19:22:47 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:47.593441790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 19:22:47 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:47.593456291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:22:47 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:47.594086740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cd43cdaa31c9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   46 seconds ago      Running             busybox                   0                   fa0f2372c88ee       busybox-5b5d89c9d6-7446n
	8899bc0038935       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   0                   a3dba3fc54c01       coredns-5dd5756b68-d22jc
	07c2872c48eda       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   b179d157b6b2f       storage-provisioner
	1a321c0e89971       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   b046b896affe9       kindnet-7b9lf
	2a62baf3f1b46       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                0                   9b3244b47278e       kube-proxy-cg28g
	cd640f130e429       7fe0e6f37db33                                                                                         4 minutes ago       Running             kube-apiserver            0                   ab390fc53b998       kube-apiserver-multinode-442000
	dbb603289bf16       e3db313c6dbc0                                                                                         4 minutes ago       Running             kube-scheduler            0                   54e39762d7a64       kube-scheduler-multinode-442000
	16b80f73683dc       d058aa5ab969c                                                                                         4 minutes ago       Running             kube-controller-manager   0                   102c907609a3a       kube-controller-manager-multinode-442000
	9585e3eb2ead2       73deb9a3f7025                                                                                         4 minutes ago       Running             etcd                      0                   af5b88117f99a       etcd-multinode-442000
	
	
	==> coredns [8899bc003893] <==
	[INFO] 10.244.0.3:45005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148512s
	[INFO] 10.244.1.2:51938 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100608s
	[INFO] 10.244.1.2:46248 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00024762s
	[INFO] 10.244.1.2:46501 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100408s
	[INFO] 10.244.1.2:52414 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056704s
	[INFO] 10.244.1.2:44908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000121409s
	[INFO] 10.244.1.2:49578 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011941s
	[INFO] 10.244.1.2:51057 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060205s
	[INFO] 10.244.1.2:56240 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055805s
	[INFO] 10.244.0.3:32901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172914s
	[INFO] 10.244.0.3:41115 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149912s
	[INFO] 10.244.0.3:40494 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013161s
	[INFO] 10.244.0.3:40575 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077106s
	[INFO] 10.244.1.2:55307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194115s
	[INFO] 10.244.1.2:46435 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00025832s
	[INFO] 10.244.1.2:52095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156813s
	[INFO] 10.244.1.2:57849 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012701s
	[INFO] 10.244.0.3:47270 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244119s
	[INFO] 10.244.0.3:59009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000411532s
	[INFO] 10.244.0.3:40925 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108108s
	[INFO] 10.244.0.3:56417 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000067706s
	[INFO] 10.244.1.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108409s
	[INFO] 10.244.1.2:38949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118209s
	[INFO] 10.244.1.2:56933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156413s
	[INFO] 10.244.1.2:35971 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000072406s
	
	
	==> describe nodes <==
	Name:               multinode-442000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-442000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-442000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T19_19_05_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:19:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-442000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:23:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:23:10 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:23:10 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:23:10 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:23:10 +0000   Thu, 14 Mar 2024 19:19:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.86.124
	  Hostname:    multinode-442000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a631478f2504cf7a53faa0b685d7672
	  System UUID:                8469b663-ea90-da4f-856d-11034a8f65d8
	  Boot ID:                    a1b2bf56-435d-41c4-ac00-a53a4e6ba2b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-7446n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-5dd5756b68-d22jc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m16s
	  kube-system                 etcd-multinode-442000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m30s
	  kube-system                 kindnet-7b9lf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m17s
	  kube-system                 kube-apiserver-multinode-442000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-controller-manager-multinode-442000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-cg28g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-multinode-442000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m38s (x8 over 4m38s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s (x8 over 4m38s)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s (x7 over 4m38s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m29s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m29s                  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m29s                  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m29s                  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m17s                  node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	  Normal  NodeReady                4m7s                   kubelet          Node multinode-442000 status is now: NodeReady
	
	
	Name:               multinode-442000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-442000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-442000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T19_22_02_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:22:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-442000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:23:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:23:04 +0000   Thu, 14 Mar 2024 19:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:23:04 +0000   Thu, 14 Mar 2024 19:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:23:04 +0000   Thu, 14 Mar 2024 19:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:23:04 +0000   Thu, 14 Mar 2024 19:22:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.80.135
	  Hostname:    multinode-442000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 35b6f7da4d3943d99d8a5913cae1c8fb
	  System UUID:                0b9b8376-0767-f940-9973-d373e3dc050d
	  Boot ID:                    45d479cc-26e8-46a6-9431-50637071f586
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-8drpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kindnet-c7m4p               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      91s
	  kube-system                 kube-proxy-72dzs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  NodeHasSufficientMemory  91s (x5 over 93s)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s (x5 over 93s)  kubelet          Node multinode-442000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s (x5 over 93s)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           87s                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	  Normal  NodeReady                73s                kubelet          Node multinode-442000-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +5.966249] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.436645] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.171687] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[Mar14 19:18] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.091270] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.500402] systemd-fstab-generator[977]: Ignoring "noauto" option for root device
	[  +0.197011] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.205731] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +2.775868] systemd-fstab-generator[1172]: Ignoring "noauto" option for root device
	[  +0.177460] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.203045] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.266065] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[ +13.055443] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.108732] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.915011] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	[  +7.510287] systemd-fstab-generator[1792]: Ignoring "noauto" option for root device
	[  +0.092469] kauditd_printk_skb: 73 callbacks suppressed
	[Mar14 19:19] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.126539] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.125030] systemd-fstab-generator[4402]: Ignoring "noauto" option for root device
	[  +0.147537] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.428947] kauditd_printk_skb: 51 callbacks suppressed
	[Mar14 19:22] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [9585e3eb2ead] <==
	{"level":"warn","ts":"2024-03-14T19:19:24.540796Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.815585ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-442000\" ","response":"range_response_count:1 size:4486"}
	{"level":"info","ts":"2024-03-14T19:19:24.540824Z","caller":"traceutil/trace.go:171","msg":"trace[1544661394] range","detail":"{range_begin:/registry/minions/multinode-442000; range_end:; response_count:1; response_revision:409; }","duration":"208.84879ms","start":"2024-03-14T19:19:24.331967Z","end":"2024-03-14T19:19:24.540815Z","steps":["trace[1544661394] 'range keys from in-memory index tree'  (duration: 208.680963ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T19:19:40.041168Z","caller":"traceutil/trace.go:171","msg":"trace[995516202] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"221.475136ms","start":"2024-03-14T19:19:39.819674Z","end":"2024-03-14T19:19:40.041149Z","steps":["trace[995516202] 'process raft request'  (duration: 221.284918ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T19:19:42.153385Z","caller":"traceutil/trace.go:171","msg":"trace[1032592412] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"103.79515ms","start":"2024-03-14T19:19:42.049572Z","end":"2024-03-14T19:19:42.153367Z","steps":["trace[1032592412] 'process raft request'  (duration: 103.644535ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T19:22:06.3645Z","caller":"traceutil/trace.go:171","msg":"trace[288245658] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"215.7896ms","start":"2024-03-14T19:22:06.148693Z","end":"2024-03-14T19:22:06.364482Z","steps":["trace[288245658] 'process raft request'  (duration: 215.698589ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T19:22:06.74293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.27071ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7798420628439897832 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-442000-m02\" mod_revision:600 > success:<request_put:<key:\"/registry/minions/multinode-442000-m02\" value_size:2907 >> failure:<request_range:<key:\"/registry/minions/multinode-442000-m02\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T19:22:06.743106Z","caller":"traceutil/trace.go:171","msg":"trace[578880569] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"513.094524ms","start":"2024-03-14T19:22:06.23Z","end":"2024-03-14T19:22:06.743094Z","steps":["trace[578880569] 'process raft request'  (duration: 326.815989ms)","trace[578880569] 'compare'  (duration: 185.10689ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T19:22:06.743272Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T19:22:06.229978Z","time spent":"513.229741ms","remote":"127.0.0.1:37032","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2953,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-442000-m02\" mod_revision:600 > success:<request_put:<key:\"/registry/minions/multinode-442000-m02\" value_size:2907 >> failure:<request_range:<key:\"/registry/minions/multinode-442000-m02\" > >"}
	{"level":"warn","ts":"2024-03-14T19:22:06.743651Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.163671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-442000-m02\" ","response":"range_response_count:1 size:2968"}
	{"level":"info","ts":"2024-03-14T19:22:06.743686Z","caller":"traceutil/trace.go:171","msg":"trace[1596834091] range","detail":"{range_begin:/registry/minions/multinode-442000-m02; range_end:; response_count:1; response_revision:605; }","duration":"279.198875ms","start":"2024-03-14T19:22:06.464475Z","end":"2024-03-14T19:22:06.743674Z","steps":["trace[1596834091] 'agreement among raft nodes before linearized reading'  (duration: 279.132267ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T19:22:06.743049Z","caller":"traceutil/trace.go:171","msg":"trace[257704657] linearizableReadLoop","detail":"{readStateIndex:657; appliedIndex:656; }","duration":"278.516991ms","start":"2024-03-14T19:22:06.464518Z","end":"2024-03-14T19:22:06.743035Z","steps":["trace[257704657] 'read index received'  (duration: 92.166047ms)","trace[257704657] 'applied index is now lower than readState.Index'  (duration: 186.349444ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T19:22:12.549505Z","caller":"traceutil/trace.go:171","msg":"trace[200474226] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"140.069236ms","start":"2024-03-14T19:22:12.409419Z","end":"2024-03-14T19:22:12.549488Z","steps":["trace[200474226] 'process raft request'  (duration: 139.588476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T19:22:13.268972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"558.410824ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7798420628439897875 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:6c398e3e673d1712>","response":"size:40"}
	{"level":"info","ts":"2024-03-14T19:22:13.269853Z","caller":"traceutil/trace.go:171","msg":"trace[473232601] linearizableReadLoop","detail":"{readStateIndex:667; appliedIndex:665; }","duration":"295.068642ms","start":"2024-03-14T19:22:12.974771Z","end":"2024-03-14T19:22:13.269839Z","steps":["trace[473232601] 'read index received'  (duration: 294.289945ms)","trace[473232601] 'applied index is now lower than readState.Index'  (duration: 777.997µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T19:22:13.269995Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T19:22:12.55154Z","time spent":"718.452745ms","remote":"127.0.0.1:36892","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-03-14T19:22:13.27031Z","caller":"traceutil/trace.go:171","msg":"trace[480633455] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"584.323549ms","start":"2024-03-14T19:22:12.685975Z","end":"2024-03-14T19:22:13.270299Z","steps":["trace[480633455] 'process raft request'  (duration: 583.728975ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T19:22:13.270702Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T19:22:12.685954Z","time spent":"584.618986ms","remote":"127.0.0.1:37032","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3133,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-442000-m02\" mod_revision:605 > success:<request_put:<key:\"/registry/minions/multinode-442000-m02\" value_size:3087 >> failure:<request_range:<key:\"/registry/minions/multinode-442000-m02\" > >"}
	{"level":"warn","ts":"2024-03-14T19:22:13.270917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.268192ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-442000-m02\" ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2024-03-14T19:22:13.27099Z","caller":"traceutil/trace.go:171","msg":"trace[1469589902] range","detail":"{range_begin:/registry/minions/multinode-442000-m02; range_end:; response_count:1; response_revision:613; }","duration":"296.3407ms","start":"2024-03-14T19:22:12.974641Z","end":"2024-03-14T19:22:13.270982Z","steps":["trace[1469589902] 'agreement among raft nodes before linearized reading'  (duration: 296.215485ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T19:22:19.351347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"405.964655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T19:22:19.351557Z","caller":"traceutil/trace.go:171","msg":"trace[444889595] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:626; }","duration":"406.184082ms","start":"2024-03-14T19:22:18.945355Z","end":"2024-03-14T19:22:19.351539Z","steps":["trace[444889595] 'count revisions from in-memory index tree'  (duration: 405.864942ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T19:22:19.351946Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T19:22:18.945336Z","time spent":"406.383407ms","remote":"127.0.0.1:36994","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":28,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true "}
	{"level":"warn","ts":"2024-03-14T19:22:19.352194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"373.520407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-442000-m02\" ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2024-03-14T19:22:19.352253Z","caller":"traceutil/trace.go:171","msg":"trace[630974862] range","detail":"{range_begin:/registry/minions/multinode-442000-m02; range_end:; response_count:1; response_revision:626; }","duration":"373.583316ms","start":"2024-03-14T19:22:18.978657Z","end":"2024-03-14T19:22:19.352241Z","steps":["trace[630974862] 'range keys from in-memory index tree'  (duration: 373.132759ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T19:22:19.352555Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T19:22:18.978578Z","time spent":"373.962762ms","remote":"127.0.0.1:37032","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3171,"request content":"key:\"/registry/minions/multinode-442000-m02\" "}
	
	
	==> kernel <==
	 19:23:34 up 6 min,  0 users,  load average: 0.66, 0.51, 0.25
	Linux multinode-442000 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1a321c0e8997] <==
	I0314 19:22:26.022365       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:22:36.037234       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:22:36.037272       1 main.go:227] handling current node
	I0314 19:22:36.037284       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:22:36.037290       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:22:46.044150       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:22:46.044240       1 main.go:227] handling current node
	I0314 19:22:46.044255       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:22:46.044263       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:22:56.056109       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:22:56.056204       1 main.go:227] handling current node
	I0314 19:22:56.056219       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:22:56.056228       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:23:06.065261       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:23:06.065358       1 main.go:227] handling current node
	I0314 19:23:06.065371       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:23:06.065379       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:23:16.074039       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:23:16.074154       1 main.go:227] handling current node
	I0314 19:23:16.074175       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:23:16.074186       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:23:26.080267       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:23:26.080378       1 main.go:227] handling current node
	I0314 19:23:26.080395       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:23:26.080405       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [cd640f130e42] <==
	I0314 19:19:02.338109       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 19:19:02.515980       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0314 19:19:02.531592       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.17.86.124]
	I0314 19:19:02.533129       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 19:19:02.541303       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 19:19:03.233535       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 19:19:04.375127       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 19:19:04.404662       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0314 19:19:04.419364       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 19:19:16.278098       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0314 19:19:16.777362       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0314 19:22:06.744902       1 trace.go:236] Trace[2066474087]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:81ead457-b6db-4a38-8f07-c91ac503f121,client:172.17.86.124,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-442000-m02,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:node-controller,verb:PATCH (14-Mar-2024 19:22:06.227) (total time: 516ms):
	Trace[2066474087]: ["GuaranteedUpdate etcd3" audit-id:81ead457-b6db-4a38-8f07-c91ac503f121,key:/minions/multinode-442000-m02,type:*core.Node,resource:nodes 516ms (19:22:06.227)
	Trace[2066474087]:  ---"Txn call completed" 514ms (19:22:06.743)]
	Trace[2066474087]: ---"Object stored in database" 514ms (19:22:06.743)
	Trace[2066474087]: [516.448841ms] [516.448841ms] END
	I0314 19:22:13.272541       1 trace.go:236] Trace[889089018]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b82fc3fc-7e3f-4e3e-bc0a-01ae982f3b56,client:172.17.80.135,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-442000-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (14-Mar-2024 19:22:12.681) (total time: 590ms):
	Trace[889089018]: ["GuaranteedUpdate etcd3" audit-id:b82fc3fc-7e3f-4e3e-bc0a-01ae982f3b56,key:/minions/multinode-442000-m02,type:*core.Node,resource:nodes 590ms (19:22:12.682)
	Trace[889089018]:  ---"Txn call completed" 587ms (19:22:13.272)]
	Trace[889089018]: ---"Object stored in database" 587ms (19:22:13.272)
	Trace[889089018]: [590.631134ms] [590.631134ms] END
	I0314 19:22:13.354500       1 trace.go:236] Trace[1511663482]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.17.86.124,type:*v1.Endpoints,resource:apiServerIPInfo (14-Mar-2024 19:22:12.501) (total time: 853ms):
	Trace[1511663482]: ---"Transaction prepared" 720ms (19:22:13.271)
	Trace[1511663482]: ---"Txn call completed" 82ms (19:22:13.354)
	Trace[1511663482]: [853.309636ms] [853.309636ms] END
	
	
	==> kube-controller-manager [16b80f73683d] <==
	I0314 19:19:18.500300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.716936ms"
	I0314 19:19:18.500887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.317µs"
	I0314 19:19:26.475232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.515µs"
	I0314 19:19:26.505160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.309µs"
	I0314 19:19:28.423231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.310782ms"
	I0314 19:19:28.423925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.006µs"
	I0314 19:19:31.116802       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0314 19:22:02.467925       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:22:02.479576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m02" podCIDRs=["10.244.1.0/24"]
	I0314 19:22:02.507610       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-72dzs"
	I0314 19:22:02.511169       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-c7m4p"
	I0314 19:22:06.145908       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:22:06.146201       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:22:20.862710       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:22:45.188036       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0314 19:22:45.218022       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8drpb"
	I0314 19:22:45.241867       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-7446n"
	I0314 19:22:45.267427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="80.313691ms"
	I0314 19:22:45.292961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="25.159362ms"
	I0314 19:22:45.311264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.241692ms"
	I0314 19:22:45.311407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="93.911µs"
	I0314 19:22:48.320252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.515467ms"
	I0314 19:22:48.320403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.303µs"
	I0314 19:22:48.344640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.018521ms"
	I0314 19:22:48.344838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.804µs"
	
	
	==> kube-proxy [2a62baf3f1b4] <==
	I0314 19:19:18.247796       1 server_others.go:69] "Using iptables proxy"
	I0314 19:19:18.275162       1 node.go:141] Successfully retrieved node IP: 172.17.86.124
	I0314 19:19:18.379821       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:19:18.379851       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:19:18.395429       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:19:18.395506       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:19:18.395856       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:19:18.395890       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:19:18.417861       1 config.go:188] "Starting service config controller"
	I0314 19:19:18.417913       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:19:18.417950       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:19:18.420511       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:19:18.426566       1 config.go:315] "Starting node config controller"
	I0314 19:19:18.426600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:19:18.519508       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:19:18.524347       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:19:18.527360       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [dbb603289bf1] <==
	W0314 19:19:01.382148       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 19:19:01.382194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 19:19:01.454259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0314 19:19:01.454398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 19:19:01.505982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 19:19:01.506182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 19:19:01.640521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 19:19:01.640836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0314 19:19:01.681052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 19:19:01.681953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 19:19:01.732243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 19:19:01.732288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 19:19:01.767241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 19:19:01.767329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 19:19:01.783665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 19:19:01.783845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0314 19:19:01.812936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 19:19:01.813027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 19:19:01.821109       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 19:19:01.821267       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 19:19:01.843311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 19:19:01.843339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 19:19:01.914649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 19:19:01.914986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:19:04.090863       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 19:19:28 multinode-442000 kubelet[2820]: I0314 19:19:28.398954    2820 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.398911938 podCreationTimestamp="2024-03-14 19:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:19:28.380719513 +0000 UTC m=+24.062508483" watchObservedRunningTime="2024-03-14 19:19:28.398911938 +0000 UTC m=+24.080700808"
	Mar 14 19:20:04 multinode-442000 kubelet[2820]: E0314 19:20:04.689832    2820 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:20:04 multinode-442000 kubelet[2820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:20:04 multinode-442000 kubelet[2820]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:20:04 multinode-442000 kubelet[2820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:20:04 multinode-442000 kubelet[2820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:21:04 multinode-442000 kubelet[2820]: E0314 19:21:04.689561    2820 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:21:04 multinode-442000 kubelet[2820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:21:04 multinode-442000 kubelet[2820]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:21:04 multinode-442000 kubelet[2820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:21:04 multinode-442000 kubelet[2820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:22:04 multinode-442000 kubelet[2820]: E0314 19:22:04.689957    2820 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:22:04 multinode-442000 kubelet[2820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:22:04 multinode-442000 kubelet[2820]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:22:04 multinode-442000 kubelet[2820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:22:04 multinode-442000 kubelet[2820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:22:45 multinode-442000 kubelet[2820]: I0314 19:22:45.257926    2820 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-d22jc" podStartSLOduration=208.257874349 podCreationTimestamp="2024-03-14 19:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:19:28.399869623 +0000 UTC m=+24.081658593" watchObservedRunningTime="2024-03-14 19:22:45.257874349 +0000 UTC m=+220.939663219"
	Mar 14 19:22:45 multinode-442000 kubelet[2820]: I0314 19:22:45.258606    2820 topology_manager.go:215] "Topology Admit Handler" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2" podNamespace="default" podName="busybox-5b5d89c9d6-7446n"
	Mar 14 19:22:45 multinode-442000 kubelet[2820]: I0314 19:22:45.457197    2820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hh9s\" (UniqueName: \"kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s\") pod \"busybox-5b5d89c9d6-7446n\" (UID: \"6ca0ace6-596a-4504-80b5-0cc0cc11f9a2\") " pod="default/busybox-5b5d89c9d6-7446n"
	Mar 14 19:22:46 multinode-442000 kubelet[2820]: I0314 19:22:46.273144    2820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773"
	Mar 14 19:23:04 multinode-442000 kubelet[2820]: E0314 19:23:04.695217    2820 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:23:04 multinode-442000 kubelet[2820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:23:04 multinode-442000 kubelet[2820]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:23:04 multinode-442000 kubelet[2820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:23:04 multinode-442000 kubelet[2820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 19:23:26.364452    9192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
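A note on the etcd capture above: the repeated "apply request took too long" warnings (etcd's expected duration is 100ms; several requests here take 200-700ms) usually point to disk or CPU contention on the CI host rather than a cluster fault. As an editor-added illustration, not part of the minikube test suite, a minimal Go filter that pulls those entries and their durations out of an etcd JSON log stream:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // etcd trace lines can be long
		for sc.Scan() {
			var e struct {
				Level string `json:"level"`
				Msg   string `json:"msg"`
				Took  string `json:"took"`
			}
			if json.Unmarshal(sc.Bytes(), &e) != nil {
				continue // skip non-JSON lines in mixed minikube logs
			}
			if e.Msg == "apply request took too long" {
				fmt.Printf("%s: slow apply took %s\n", e.Level, e.Took)
			}
		}
	}

Piping a saved "minikube logs" dump through this surfaces only the slow-apply entries; leading whitespace on the captured lines is tolerated by json.Unmarshal.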
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-442000 -n multinode-442000
E0314 19:23:38.463558   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-442000 -n multinode-442000: (11.1380674s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-442000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (53.48s)
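Every stderr capture in this report opens with the same main.go:291 warning: the Docker CLI context store under .docker\contexts\meta has no entry for the "default" context, which is harmless on a host that has never created one. The long directory name in the warning, 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f, is the SHA-256 digest of the string "default", the key the Docker CLI uses for context metadata. A short Go sketch of the mapping (contextMetaPath is a hypothetical helper written for this report, not a docker or minikube API):

	package main

	import (
		"crypto/sha256"
		"fmt"
		"os"
		"path/filepath"
	)

	// contextMetaPath returns where the Docker CLI would keep metadata for a
	// named context: <home>/.docker/contexts/meta/<sha256(name)>/meta.json.
	func contextMetaPath(home, name string) string {
		sum := sha256.Sum256([]byte(name))
		return filepath.Join(home, ".docker", "contexts", "meta",
			fmt.Sprintf("%x", sum), "meta.json")
	}

	func main() {
		home, _ := os.UserHomeDir()
		fmt.Println(contextMetaPath(home, "default")) // ends in 37a8eec1.../meta.json
	}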

                                                
                                    
TestMultiNode/serial/StopNode (97.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 node stop m03
E0314 19:33:01.484387   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 19:33:18.262551   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-442000 node stop m03: exit status 30 (17.5374044s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-442000-m03"  ...
	* Powering off "multinode-442000-m03" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 19:33:01.210231    3536 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0314 19:33:18.609785    3536 main.go:137] libmachine: [stderr =====>] : Hyper-V\Stop-VM : 'multinode-442000-m03' failed to change state.
	The operation cannot be performed while the object is in its current state.
	At line:1 char:1
	+ Hyper-V\Stop-VM multinode-442000-m03
	+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidOperation: (:) [Stop-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidState,Microsoft.HyperV.PowerShell.Commands.StopVM
	 
	
	X Failed to stop node m03: Temporary Error: stop: exit status 1

                                                
                                                
** /stderr **
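The InvalidState failure above is Hyper-V refusing Stop-VM because the VM is not in a stoppable state (for example, already powering off or saved). As an illustrative sketch only, not minikube's actual driver code, and assuming a Windows host with the Hyper-V PowerShell module, a state-checked stop in Go that mirrors the powershell.exe invocations libmachine logs later in this report:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// vmState queries ( Hyper-V\Get-VM <name> ).state, the same probe seen in
	// the libmachine status log lines below.
	func vmState(name string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
			fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name)).Output()
		return strings.TrimSpace(string(out)), err
	}

	func stopVM(name string) error {
		state, err := vmState(name)
		if err != nil {
			return err
		}
		if state != "Running" {
			// Stop-VM on a non-running VM raises the InvalidState error above.
			return fmt.Errorf("vm %s is %q; skipping Stop-VM", name, state)
		}
		return exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
			fmt.Sprintf("Hyper-V\\Stop-VM %s", name)).Run()
	}

	func main() {
		if err := stopVM("multinode-442000-m03"); err != nil {
			fmt.Println(err)
		}
	}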
multinode_test.go:250: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-442000 node stop m03": exit status 30
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 status
E0314 19:33:38.513495   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-442000 status: exit status 7 (23.9937568s)

                                                
                                                
-- stdout --
	multinode-442000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-442000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-442000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 19:33:18.773888   12888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-442000 status --alsologtostderr: exit status 7 (23.9790654s)

                                                
                                                
-- stdout --
	multinode-442000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-442000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-442000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 19:33:42.779648    1548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0314 19:33:42.834661    1548 out.go:291] Setting OutFile to fd 1412 ...
	I0314 19:33:42.835736    1548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:33:42.835812    1548 out.go:304] Setting ErrFile to fd 1560...
	I0314 19:33:42.835812    1548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:33:42.851378    1548 out.go:298] Setting JSON to false
	I0314 19:33:42.851378    1548 mustload.go:65] Loading cluster: multinode-442000
	I0314 19:33:42.851378    1548 notify.go:220] Checking for updates...
	I0314 19:33:42.852177    1548 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:33:42.852177    1548 status.go:255] checking status of multinode-442000 ...
	I0314 19:33:42.852177    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:33:44.883495    1548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:33:44.883495    1548 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:33:44.883573    1548 status.go:330] multinode-442000 host status = "Running" (err=<nil>)
	I0314 19:33:44.883573    1548 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:33:44.884045    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:33:46.862398    1548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:33:46.862398    1548 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:33:46.862398    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:33:49.236863    1548 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:33:49.236863    1548 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:33:49.237133    1548 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:33:49.246233    1548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 19:33:49.246233    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:33:51.210868    1548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:33:51.210868    1548 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:33:51.210868    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:33:53.575135    1548 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:33:53.575135    1548 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:33:53.576000    1548 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:33:53.666619    1548 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.4200676s)
	I0314 19:33:53.679460    1548 ssh_runner.go:195] Run: systemctl --version
	I0314 19:33:53.705311    1548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:33:53.733019    1548 kubeconfig.go:125] found "multinode-442000" server: "https://172.17.86.124:8443"
	I0314 19:33:53.733019    1548 api_server.go:166] Checking apiserver status ...
	I0314 19:33:53.743179    1548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:33:53.775002    1548 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2278/cgroup
	W0314 19:33:53.792616    1548 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2278/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:33:53.803781    1548 ssh_runner.go:195] Run: ls
	I0314 19:33:53.811146    1548 api_server.go:253] Checking apiserver healthz at https://172.17.86.124:8443/healthz ...
	I0314 19:33:53.817116    1548 api_server.go:279] https://172.17.86.124:8443/healthz returned 200:
	ok
	I0314 19:33:53.818107    1548 status.go:422] multinode-442000 apiserver status = Running (err=<nil>)
	I0314 19:33:53.818107    1548 status.go:257] multinode-442000 status: &{Name:multinode-442000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 19:33:53.818107    1548 status.go:255] checking status of multinode-442000-m02 ...
	I0314 19:33:53.818634    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:33:55.798852    1548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:33:55.798852    1548 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:33:55.798852    1548 status.go:330] multinode-442000-m02 host status = "Running" (err=<nil>)
	I0314 19:33:55.798852    1548 host.go:66] Checking if "multinode-442000-m02" exists ...
	I0314 19:33:55.799599    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:33:57.799263    1548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:33:57.799263    1548 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:33:57.799263    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:34:00.173746    1548 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:34:00.173746    1548 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:34:00.173746    1548 host.go:66] Checking if "multinode-442000-m02" exists ...
	I0314 19:34:00.183253    1548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 19:34:00.183253    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:34:02.134085    1548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:34:02.134085    1548 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:34:02.135255    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:34:04.502572    1548 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:34:04.502780    1548 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:34:04.503099    1548 sshutil.go:53] new ssh client: &{IP:172.17.80.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:34:04.609625    1548 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.4260514s)
	I0314 19:34:04.620344    1548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:34:04.647743    1548 status.go:257] multinode-442000-m02 status: &{Name:multinode-442000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0314 19:34:04.647743    1548 status.go:255] checking status of multinode-442000-m03 ...
	I0314 19:34:04.648462    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:34:06.601378    1548 main.go:141] libmachine: [stdout =====>] : Off
	
	I0314 19:34:06.601378    1548 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:34:06.601456    1548 status.go:330] multinode-442000-m03 host status = "Stopped" (err=<nil>)
	I0314 19:34:06.601456    1548 status.go:343] host is not running, skipping remaining checks
	I0314 19:34:06.601456    1548 status.go:257] multinode-442000-m03 status: &{Name:multinode-442000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
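The status trace above also shows the health-check fallback: the freezer cgroup lookup fails (expected when the guest uses cgroup v2, which has no named freezer hierarchy in /proc/<pid>/cgroup), and minikube then confirms the apiserver by probing https://172.17.86.124:8443/healthz, which returns 200 here. A minimal standalone Go sketch of that probe (a hypothetical helper, skipping TLS verification because the test cluster's CA is self-signed):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Self-signed test cluster CA, so certificate verification is skipped.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://172.17.86.124:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status) // the trace above logs "returned 200: ok"
	}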
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-442000 -n multinode-442000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-442000 -n multinode-442000: (11.1559382s)
helpers_test.go:244: <<< TestMultiNode/serial/StopNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 logs -n 25: (7.8530814s)
helpers_test.go:252: TestMultiNode/serial/StopNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-442000 ssh -n multinode-442000-m02 sudo cat                                                                    | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:29 UTC | 14 Mar 24 19:29 UTC |
	|         | /home/docker/cp-test_multinode-442000_multinode-442000-m02.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000:/home/docker/cp-test.txt                                                            | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:29 UTC | 14 Mar 24 19:29 UTC |
	|         | multinode-442000-m03:/home/docker/cp-test_multinode-442000_multinode-442000-m03.txt                                      |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:29 UTC | 14 Mar 24 19:29 UTC |
	|         | multinode-442000 sudo cat                                                                                                |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n multinode-442000-m03 sudo cat                                                                    | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:29 UTC | 14 Mar 24 19:29 UTC |
	|         | /home/docker/cp-test_multinode-442000_multinode-442000-m03.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp testdata\cp-test.txt                                                                                 | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:29 UTC | 14 Mar 24 19:29 UTC |
	|         | multinode-442000-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:29 UTC | 14 Mar 24 19:29 UTC |
	|         | multinode-442000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000-m02:/home/docker/cp-test.txt                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:29 UTC | 14 Mar 24 19:30 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1678027892\001\cp-test_multinode-442000-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:30 UTC | 14 Mar 24 19:30 UTC |
	|         | multinode-442000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000-m02:/home/docker/cp-test.txt                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:30 UTC | 14 Mar 24 19:30 UTC |
	|         | multinode-442000:/home/docker/cp-test_multinode-442000-m02_multinode-442000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:30 UTC | 14 Mar 24 19:30 UTC |
	|         | multinode-442000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n multinode-442000 sudo cat                                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:30 UTC | 14 Mar 24 19:30 UTC |
	|         | /home/docker/cp-test_multinode-442000-m02_multinode-442000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000-m02:/home/docker/cp-test.txt                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:30 UTC | 14 Mar 24 19:31 UTC |
	|         | multinode-442000-m03:/home/docker/cp-test_multinode-442000-m02_multinode-442000-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:31 UTC |
	|         | multinode-442000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n multinode-442000-m03 sudo cat                                                                    | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:31 UTC |
	|         | /home/docker/cp-test_multinode-442000-m02_multinode-442000-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp testdata\cp-test.txt                                                                                 | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:31 UTC |
	|         | multinode-442000-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:31 UTC |
	|         | multinode-442000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000-m03:/home/docker/cp-test.txt                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:31 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1678027892\001\cp-test_multinode-442000-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:31 UTC |
	|         | multinode-442000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000-m03:/home/docker/cp-test.txt                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:32 UTC |
	|         | multinode-442000:/home/docker/cp-test_multinode-442000-m03_multinode-442000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:32 UTC | 14 Mar 24 19:32 UTC |
	|         | multinode-442000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n multinode-442000 sudo cat                                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:32 UTC | 14 Mar 24 19:32 UTC |
	|         | /home/docker/cp-test_multinode-442000-m03_multinode-442000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000-m03:/home/docker/cp-test.txt                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:32 UTC | 14 Mar 24 19:32 UTC |
	|         | multinode-442000-m02:/home/docker/cp-test_multinode-442000-m03_multinode-442000-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:32 UTC | 14 Mar 24 19:32 UTC |
	|         | multinode-442000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n multinode-442000-m02 sudo cat                                                                    | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:32 UTC | 14 Mar 24 19:33 UTC |
	|         | /home/docker/cp-test_multinode-442000-m03_multinode-442000-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-442000 node stop m03                                                                                           | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:33 UTC |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
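
The table above is the audit trail for the TestMultiNode serial CopyFile phase: each "cp" row pushes a file to a node (host-to-guest or guest-to-guest), and the "ssh ... sudo cat" row that follows verifies the bytes arrived. A minimal Go sketch of that round-trip, not the actual test code, with the profile name and paths taken from the rows above:

    // copyfile_sketch.go: sketch of the cp/ssh round-trip in the audit table.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // run invokes minikube with the given arguments and returns combined output.
    func run(args ...string) (string, error) {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        profile := "multinode-442000"

        // Host -> guest: push a local file into node m02.
        if _, err := run("-p", profile, "cp", "testdata/cp-test.txt",
            profile+"-m02:/home/docker/cp-test.txt"); err != nil {
            panic(err)
        }

        // Verify the copy by cat-ing the file over SSH on that node.
        out, err := run("-p", profile, "ssh", "-n", profile+"-m02",
            "sudo cat /home/docker/cp-test.txt")
        if err != nil {
            panic(err)
        }
        fmt.Print(out)

        // Guest -> guest: copy from m02 to m03, as the later rows do.
        if _, err := run("-p", profile, "cp",
            profile+"-m02:/home/docker/cp-test.txt",
            profile+"-m03:/home/docker/cp-test_multinode-442000-m02_multinode-442000-m03.txt"); err != nil {
            panic(err)
        }
    }
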
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:16:05
	Running on machine: minikube7
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:16:05.281792    9056 out.go:291] Setting OutFile to fd 1180 ...
	I0314 19:16:05.282780    9056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:16:05.282780    9056 out.go:304] Setting ErrFile to fd 1292...
	I0314 19:16:05.282780    9056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:16:05.310790    9056 out.go:298] Setting JSON to false
	I0314 19:16:05.314787    9056 start.go:129] hostinfo: {"hostname":"minikube7","uptime":65569,"bootTime":1710378195,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 19:16:05.315791    9056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 19:16:05.323776    9056 out.go:177] * [multinode-442000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 19:16:05.327782    9056 notify.go:220] Checking for updates...
	I0314 19:16:05.329779    9056 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:16:05.331779    9056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:16:05.333789    9056 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 19:16:05.336840    9056 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:16:05.338784    9056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:16:05.341780    9056 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:16:05.341780    9056 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:16:10.372888    9056 out.go:177] * Using the hyperv driver based on user configuration
	I0314 19:16:10.376021    9056 start.go:297] selected driver: hyperv
	I0314 19:16:10.376652    9056 start.go:901] validating driver "hyperv" against <nil>
	I0314 19:16:10.376739    9056 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:16:10.435043    9056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 19:16:10.436301    9056 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:16:10.436301    9056 cni.go:84] Creating CNI manager for ""
	I0314 19:16:10.436301    9056 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0314 19:16:10.436301    9056 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
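
The three cni.go lines above record the CNI pick: with a multi-node cluster requested and no nodes yet, minikube recommends kindnet and pins NetworkPlugin=cni. A rough sketch of that decision; the function and parameter names here are illustrative, not minikube's actual API:

    // cni_choice_sketch.go: rough shape of the CNI selection logged above.
    package main

    import "fmt"

    // chooseCNI mirrors "multinode detected (0 nodes found), recommending kindnet".
    func chooseCNI(multiNodeRequested bool, existingNodes int, userChoice string) string {
        if userChoice != "" {
            return userChoice // an explicit --cni flag would win
        }
        if multiNodeRequested || existingNodes > 1 {
            return "kindnet" // pod networking across VMs needs a real CNI
        }
        return "" // single node: the runtime's default bridge is enough
    }

    func main() {
        fmt.Println(chooseCNI(true, 0, "")) // -> kindnet
    }
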
	I0314 19:16:10.437007    9056 start.go:340] cluster config:
	{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:16:10.437007    9056 iso.go:125] acquiring lock: {Name:mk1b3e73402180391a20a865a9454da445c269fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:16:10.443166    9056 out.go:177] * Starting "multinode-442000" primary control-plane node in "multinode-442000" cluster
	I0314 19:16:10.445194    9056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:16:10.445336    9056 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0314 19:16:10.445336    9056 cache.go:56] Caching tarball of preloaded images
	I0314 19:16:10.445336    9056 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 19:16:10.445336    9056 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
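
The preload lines above show the download short-circuit: the tarball of pre-pulled images for v1.28.4 on docker already sits under the cache directory, so only its existence is re-verified. A sketch of that lookup, with the path layout copied from the log and the helper name assumed:

    // preload_sketch.go: check the local preload cache before downloading.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath builds the cache path for a k8s version / container runtime,
    // matching the file name visible in the log.
    func preloadPath(minikubeHome, k8sVersion, runtime string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4",
            k8sVersion, runtime)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.4", "docker")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("found in cache, skipping download:", p)
        } else {
            fmt.Println("not cached, would download:", p)
        }
    }
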
	I0314 19:16:10.446000    9056 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:16:10.446242    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json: {Name:mka904f9f7523977aee93994c8b9f11b44f61fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:16:10.447219    9056 start.go:360] acquireMachinesLock for multinode-442000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:16:10.447386    9056 start.go:364] duration metric: took 53.5µs to acquireMachinesLock for "multinode-442000"
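
acquireMachinesLock serializes machine creation per machine name; the {... Delay:500ms Timeout:13m0s ...} spec in the log is a retry-with-deadline lock (the field names match a juju/mutex-style spec, which is an assumption here). A stand-in sketch using a lock file with the same retry shape:

    // lock_sketch.go: retry-until-deadline lock in the shape logged above.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire retries an exclusive-create of a lock file every delay until timeout.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out acquiring " + path)
            }
            time.Sleep(delay) // log shows Delay:500ms Timeout:13m0s
        }
    }

    func main() {
        release, err := acquire("machines.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; provisioning machine...")
    }
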
	I0314 19:16:10.447501    9056 start.go:93] Provisioning new machine with config: &{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 19:16:10.447674    9056 start.go:125] createHost starting for "" (driver="hyperv")
	I0314 19:16:10.449637    9056 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 19:16:10.450253    9056 start.go:159] libmachine.API.Create for "multinode-442000" (driver="hyperv")
	I0314 19:16:10.450253    9056 client.go:168] LocalClient.Create starting
	I0314 19:16:10.450844    9056 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0314 19:16:10.451007    9056 main.go:141] libmachine: Decoding PEM data...
	I0314 19:16:10.451060    9056 main.go:141] libmachine: Parsing certificate...
	I0314 19:16:10.451276    9056 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0314 19:16:10.451439    9056 main.go:141] libmachine: Decoding PEM data...
	I0314 19:16:10.451439    9056 main.go:141] libmachine: Parsing certificate...
	I0314 19:16:10.451563    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0314 19:16:12.392205    9056 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0314 19:16:12.392729    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:12.392785    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0314 19:16:14.049936    9056 main.go:141] libmachine: [stdout =====>] : False
	
	I0314 19:16:14.050152    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:14.050152    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 19:16:15.465041    9056 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 19:16:15.465041    9056 main.go:141] libmachine: [stderr =====>] : 
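
The three PowerShell probes above are the driver's privilege gate: is the Hyper-V module installed, is the current user in BUILTIN\Hyper-V Administrators (SID S-1-5-32-578), and failing that, is the user a full Administrator. Here the second probe returns False and the third True, so provisioning proceeds with admin rights. A Windows-only Go sketch of the same probes:

    // hyperv_priv_sketch.go: shell out to PowerShell for the three checks above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // ps runs one PowerShell command the way the log does and trims its stdout.
    func ps(cmd string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        mod, _ := ps(`@(Get-Module -ListAvailable hyper-v).Name | Get-Unique`)
        hvAdmin, _ := ps(`@([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))`)
        admin, _ := ps(`@([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")`)
        // In the log: module=Hyper-V, Hyper-V Administrators=False, Administrator=True.
        fmt.Printf("module=%q hypervAdmins=%s admin=%s\n", mod, hvAdmin, admin)
    }
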
	I0314 19:16:15.465591    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 19:16:18.859602    9056 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 19:16:18.859602    9056 main.go:141] libmachine: [stderr =====>] : 
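
The switch query filters Get-VMSwitch down to External switches plus the well-known Default Switch GUID and emits JSON; on this host only the Default Switch exists (SwitchType 1, i.e. Internal in Hyper-V's VMSwitchType enum). A sketch of consuming that JSON, with the struct mirroring the Select-Object projection:

    // switch_sketch.go: pick a virtual switch from the ConvertTo-Json output above.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type vmSwitch struct {
        Id         string `json:"Id"`
        Name       string `json:"Name"`
        SwitchType int    `json:"SwitchType"` // 0=Private, 1=Internal (Default Switch), 2=External
    }

    func main() {
        raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
        var switches []vmSwitch
        if err := json.Unmarshal(raw, &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            if s.SwitchType == 2 || s.Id == "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444" {
                fmt.Printf("Using switch %q\n", s.Name) // log: Using switch "Default Switch"
                return
            }
        }
        fmt.Println("no usable switch found")
    }
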
	I0314 19:16:18.861835    9056 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 19:16:19.187583    9056 main.go:141] libmachine: Creating SSH key...
	I0314 19:16:19.321886    9056 main.go:141] libmachine: Creating VM...
	I0314 19:16:19.322884    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 19:16:22.031758    9056 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 19:16:22.031758    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:22.031848    9056 main.go:141] libmachine: Using switch "Default Switch"
	I0314 19:16:22.031908    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 19:16:23.704927    9056 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 19:16:23.705236    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:23.705511    9056 main.go:141] libmachine: Creating VHD
	I0314 19:16:23.705721    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0314 19:16:27.309624    9056 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 96D62F53-6B38-4253-BE69-5942B8815E3F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0314 19:16:27.309717    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:27.309717    9056 main.go:141] libmachine: Writing magic tar header
	I0314 19:16:27.309717    9056 main.go:141] libmachine: Writing SSH key tar header
	I0314 19:16:27.319647    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0314 19:16:30.376802    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:30.376802    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:30.376802    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\disk.vhd' -SizeBytes 20000MB
	I0314 19:16:32.759521    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:32.759521    9056 main.go:141] libmachine: [stderr =====>] : 
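
The disk dance above is how the Hyper-V driver injects the SSH key: create a tiny 10MB *fixed* VHD, write a "magic" tar stream carrying the key into its raw data area, then Convert-VHD to a dynamic disk and Resize-VHD to the full 20000MB; the guest detects and expands the tar on first boot. A hedged reconstruction of the tar-writing piece only; the guest-side path and exact layout are assumptions, not the driver's actual code:

    // vhd_key_sketch.go: write an SSH key as a tar stream into a fixed VHD.
    package main

    import (
        "archive/tar"
        "os"
    )

    // writeKeyTar overwrites the leading bytes of an existing fixed VHD with a
    // tar archive containing the public key (a fixed VHD keeps its footer at
    // the end, so the data area starts at offset 0).
    func writeKeyTar(vhdPath string, pubKey []byte) error {
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer f.Close()

        tw := tar.NewWriter(f)
        hdr := &tar.Header{
            Name: ".ssh/authorized_keys", // guest-side path is an assumption
            Mode: 0o600,
            Size: int64(len(pubKey)),
        }
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(pubKey); err != nil {
            return err
        }
        return tw.Close()
    }

    func main() {
        key, err := os.ReadFile("id_rsa.pub")
        if err != nil {
            panic(err)
        }
        if err := writeKeyTar("fixed.vhd", key); err != nil {
            panic(err)
        }
    }
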
	I0314 19:16:32.759521    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-442000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0314 19:16:36.207799    9056 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-442000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0314 19:16:36.208609    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:36.208634    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-442000 -DynamicMemoryEnabled $false
	I0314 19:16:38.328355    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:38.329310    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:38.329310    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-442000 -Count 2
	I0314 19:16:40.363469    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:40.363469    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:40.363469    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-442000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\boot2docker.iso'
	I0314 19:16:42.827513    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:42.827513    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:42.828225    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-442000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\disk.vhd'
	I0314 19:16:45.290078    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:45.290828    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:45.290828    9056 main.go:141] libmachine: Starting VM...
	I0314 19:16:45.290904    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-442000
	I0314 19:16:48.205105    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:48.205105    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:48.205105    9056 main.go:141] libmachine: Waiting for host to start...
	I0314 19:16:48.205105    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:16:50.285585    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:16:50.285585    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:50.285808    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:16:52.663604    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:52.663604    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:53.667654    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:16:55.660156    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:16:55.660923    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:55.660923    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:16:57.982374    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:16:57.982374    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:16:58.998179    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:01.019679    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:01.019679    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:01.019732    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:03.361021    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:17:03.361021    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:04.364207    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:06.384129    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:06.385136    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:06.385188    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:08.673945    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:17:08.673945    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:09.678475    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:11.766860    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:11.766940    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:11.766994    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:14.165951    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:14.166510    9056 main.go:141] libmachine: [stderr =====>] : 
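
"Waiting for host to start..." is a plain poll loop: check the VM state, ask for the first IP of the first network adapter, sleep, repeat; 172.17.86.124 appears after roughly 26 seconds once the Default Switch's DHCP has leased an address. A sketch of the loop, reusing the exact PowerShell expressions from the log:

    // wait_ip_sketch.go: poll VM state and NIC address until the guest has an IP.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func ps(cmd string) string {
        out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
        return strings.TrimSpace(string(out))
    }

    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm)) == "Running" {
                ip := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
                if ip != "" {
                    return ip, nil // log: 172.17.86.124 after several empty rounds
                }
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
    }

    func main() {
        ip, err := waitForIP("multinode-442000", 5*time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println("host is up at", ip)
    }
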
	I0314 19:17:14.166510    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:16.155122    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:16.155122    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:16.155242    9056 machine.go:94] provisionDockerMachine start ...
	I0314 19:17:16.155379    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:18.170935    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:18.171786    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:18.171786    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:20.570660    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:20.571719    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:20.576141    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:17:20.587624    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:17:20.587624    9056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:17:20.715077    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:17:20.715077    9056 buildroot.go:166] provisioning hostname "multinode-442000"
	I0314 19:17:20.715077    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:22.673390    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:22.673390    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:22.673390    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:25.022102    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:25.022605    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:25.026226    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:17:25.026751    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:17:25.026938    9056 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-442000 && echo "multinode-442000" | sudo tee /etc/hostname
	I0314 19:17:25.177771    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-442000
	
	I0314 19:17:25.178006    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:27.156527    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:27.156527    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:27.156527    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:29.567430    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:29.567914    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:29.571532    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:17:29.572214    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:17:29.572214    9056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-442000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-442000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-442000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:17:29.714523    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:17:29.714645    9056 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 19:17:29.714780    9056 buildroot.go:174] setting up certificates
	I0314 19:17:29.714780    9056 provision.go:84] configureAuth start
	I0314 19:17:29.714841    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:31.692947    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:31.692947    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:31.693426    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:34.064508    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:34.064508    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:34.064994    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:36.070069    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:36.070069    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:36.070069    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:38.448548    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:38.448548    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:38.448624    9056 provision.go:143] copyHostCerts
	I0314 19:17:38.448756    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 19:17:38.448756    9056 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 19:17:38.448756    9056 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 19:17:38.449300    9056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 19:17:38.450042    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 19:17:38.450308    9056 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 19:17:38.450308    9056 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 19:17:38.450308    9056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 19:17:38.451291    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 19:17:38.451368    9056 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 19:17:38.451368    9056 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 19:17:38.451368    9056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 19:17:38.452367    9056 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-442000 san=[127.0.0.1 172.17.86.124 localhost minikube multinode-442000]
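
configureAuth generates a server certificate signed by the local minikube CA with the SAN set listed above: loopback, the VM IP, localhost, minikube, and the profile name. A self-signed stand-in that shows only the SAN construction; the real step signs with ca.pem/ca-key.pem, and the 26280h lifetime matches CertExpiration in the config dump:

    // servercert_sketch.go: build a server cert with the SANs from the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-442000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.86.124")},
            DNSNames:     []string{"localhost", "minikube", "multinode-442000"},
        }
        // Self-signed stand-in; minikube signs with its CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
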
	I0314 19:17:39.012068    9056 provision.go:177] copyRemoteCerts
	I0314 19:17:39.020725    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:17:39.020725    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:41.029685    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:41.029685    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:41.030046    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:43.409651    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:43.410595    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:43.411030    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:17:43.521102    9056 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5000468s)
	I0314 19:17:43.521102    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 19:17:43.521102    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:17:43.562966    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 19:17:43.562966    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0314 19:17:43.602914    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 19:17:43.602914    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:17:43.646690    9056 provision.go:87] duration metric: took 13.9308279s to configureAuth
	I0314 19:17:43.646690    9056 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:17:43.647333    9056 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:17:43.647425    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:45.603382    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:45.603382    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:45.603382    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:47.963636    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:47.964123    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:47.970093    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:17:47.970631    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:17:47.970631    9056 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 19:17:48.095379    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 19:17:48.095379    9056 buildroot.go:70] root file system type: tmpfs
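
The df probe tells the provisioner what kind of root it is writing to; tmpfs means a Buildroot live image, so the docker unit is written out whole rather than patched in place. The same probe, run locally here for illustration (the log runs it over SSH on the guest):

    // rootfs_sketch.go: detect the root filesystem type the way the log does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func rootFSType() (string, error) {
        out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        fs, err := rootFSType()
        if err != nil {
            panic(err)
        }
        fmt.Println("root file system type:", fs) // log: tmpfs
    }
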
	I0314 19:17:48.095379    9056 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 19:17:48.095379    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:50.077513    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:50.077513    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:50.077513    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:52.459944    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:52.459944    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:52.465257    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:17:52.466168    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:17:52.466168    9056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 19:17:52.623284    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 19:17:52.623451    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:17:54.587664    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:17:54.587664    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:54.588091    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:17:56.955826    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:17:56.956115    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:17:56.960039    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:17:56.960260    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:17:56.960260    9056 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 19:17:59.046295    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
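
The diff/mv/systemctl one-liner above makes unit installation idempotent: only if the staged docker.service.new differs from the installed unit is it moved into place and the daemon reloaded, enabled, and restarted. Here no unit existed yet, hence diff's "can't stat" and the fresh enable symlink. The same compare-then-swap shape in Go, with paths as in the log and sudo assumed available on the guest:

    // unit_install_sketch.go: swap in a systemd unit only when it changed.
    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func installUnit(current, staged string) error {
        old, _ := os.ReadFile(current) // a missing file simply counts as "differs"
        neu, err := os.ReadFile(staged)
        if err != nil {
            return err
        }
        if bytes.Equal(old, neu) {
            return nil // unchanged: skip the restart entirely
        }
        if err := os.Rename(staged, current); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "-f", "daemon-reload"},
            {"systemctl", "-f", "enable", "docker"},
            {"systemctl", "-f", "restart", "docker"},
        } {
            if err := exec.Command("sudo", args...).Run(); err != nil {
                return fmt.Errorf("%v: %w", args, err)
            }
        }
        return nil
    }

    func main() {
        if err := installUnit("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new"); err != nil {
            panic(err)
        }
    }
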
	
	I0314 19:17:59.046340    9056 machine.go:97] duration metric: took 42.8879545s to provisionDockerMachine
	I0314 19:17:59.046388    9056 client.go:171] duration metric: took 1m48.5881831s to LocalClient.Create
	I0314 19:17:59.046388    9056 start.go:167] duration metric: took 1m48.5882532s to libmachine.API.Create "multinode-442000"
	I0314 19:17:59.046388    9056 start.go:293] postStartSetup for "multinode-442000" (driver="hyperv")
	I0314 19:17:59.046388    9056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:17:59.055700    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:17:59.055893    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:01.039352    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:01.039352    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:01.039440    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:03.420678    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:03.420678    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:03.421148    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:18:03.515040    9056 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4589467s)
	I0314 19:18:03.524552    9056 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:18:03.531208    9056 command_runner.go:130] > NAME=Buildroot
	I0314 19:18:03.531208    9056 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 19:18:03.531208    9056 command_runner.go:130] > ID=buildroot
	I0314 19:18:03.531208    9056 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 19:18:03.531208    9056 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 19:18:03.531208    9056 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:18:03.531208    9056 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 19:18:03.531947    9056 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 19:18:03.533011    9056 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 19:18:03.533011    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 19:18:03.544080    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:18:03.564507    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 19:18:03.614643    9056 start.go:296] duration metric: took 4.5679178s for postStartSetup
	I0314 19:18:03.616501    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:05.619285    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:05.619285    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:05.619285    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:07.980709    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:07.980709    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:07.981508    9056 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:18:07.983779    9056 start.go:128] duration metric: took 1m57.5275628s to createHost
	I0314 19:18:07.983885    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:09.975189    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:09.975189    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:09.976274    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:12.388249    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:12.388671    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:12.394326    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:18:12.394326    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:18:12.394326    9056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:18:12.508792    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710443892.745218743
	
	I0314 19:18:12.508880    9056 fix.go:216] guest clock: 1710443892.745218743
	I0314 19:18:12.508880    9056 fix.go:229] Guest: 2024-03-14 19:18:12.745218743 +0000 UTC Remote: 2024-03-14 19:18:07.9838851 +0000 UTC m=+122.838526201 (delta=4.761333643s)
	I0314 19:18:12.508880    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:14.475037    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:14.475037    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:14.475741    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:16.822748    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:16.822748    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:16.827285    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:18:16.827872    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.86.124 22 <nil> <nil>}
	I0314 19:18:16.827872    9056 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710443892
	I0314 19:18:16.959165    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 19:18:12 UTC 2024
	
	I0314 19:18:16.959165    9056 fix.go:236] clock set: Thu Mar 14 19:18:12 UTC 2024
	 (err=<nil>)
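The exchange above is the guest-clock fix: fix.go reads the guest clock with "date +%s.%N", computes the delta against the host (4.76s here), and resets the guest with "sudo date -s @<unix-seconds>". A minimal Go sketch of that logic, assuming a hypothetical sshRun helper that executes a command on the guest and returns its stdout:

    package sketch

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // syncGuestClock mirrors the fix.go exchange logged above (illustrative only):
    // read the guest clock, compare it with the host, reset it if the drift is large.
    func syncGuestClock(sshRun func(cmd string) (string, error)) error {
    	out, err := sshRun("date +%s.%N") // guest clock as <seconds>.<nanoseconds>
    	if err != nil {
    		return err
    	}
    	secs := strings.SplitN(strings.TrimSpace(out), ".", 2)[0]
    	guest, err := strconv.ParseInt(secs, 10, 64)
    	if err != nil {
    		return err
    	}
    	delta := time.Since(time.Unix(guest, 0))
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta > time.Second { // cutoff is an assumption, not minikube's exact value
    		_, err = sshRun(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
    	}
    	return err
    }

Setting the clock from epoch seconds ("date -s @N") sidesteps timezone and locale parsing on the guest, which is why the delta in the log is computed and applied in UTC.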
	I0314 19:18:16.959268    9056 start.go:83] releasing machines lock for "multinode-442000", held for 2m6.5026752s
	I0314 19:18:16.959454    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:18.968414    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:18.968414    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:18.968598    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:21.323046    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:21.323046    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:21.328298    9056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:18:21.328449    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:21.336498    9056 ssh_runner.go:195] Run: cat /version.json
	I0314 19:18:21.336498    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:18:23.374651    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:23.374651    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:23.375272    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:23.375573    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:18:23.375678    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:23.375678    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:18:25.778398    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:25.778398    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:25.779076    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:18:25.814356    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:18:25.814356    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:18:25.814356    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:18:25.954976    9056 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 19:18:25.955105    9056 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6263344s)
	I0314 19:18:25.955105    9056 command_runner.go:130] > {"iso_version": "v1.32.1-1710348681-18375", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "fd5757a6603390a2c0efe3b1e5cdd797538203fd"}
	I0314 19:18:25.955219    9056 ssh_runner.go:235] Completed: cat /version.json: (4.6183785s)
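Note the overlap above: the registry probe and the /version.json read are issued back-to-back and each completes about 4.6 seconds later, i.e. they run concurrently over separate SSH sessions. A goroutine sketch of that fan-out, with the same hypothetical sshRun helper:

    package sketch

    // preflightProbes fans out the two guest checks logged above and waits
    // for both, failing on the first error (illustrative only).
    func preflightProbes(sshRun func(cmd string) (string, error)) error {
    	errc := make(chan error, 2)
    	for _, cmd := range []string{
    		"curl -sS -m 2 https://registry.k8s.io/", // image-registry reachability
    		"cat /version.json",                      // ISO/kicbase version metadata
    	} {
    		cmd := cmd // capture loop variable for the goroutine
    		go func() { _, err := sshRun(cmd); errc <- err }()
    	}
    	for i := 0; i < 2; i++ {
    		if err := <-errc; err != nil {
    			return err
    		}
    	}
    	return nil
    }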
	I0314 19:18:25.964367    9056 ssh_runner.go:195] Run: systemctl --version
	I0314 19:18:25.973042    9056 command_runner.go:130] > systemd 252 (252)
	I0314 19:18:25.974058    9056 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0314 19:18:25.983175    9056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 19:18:25.991457    9056 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0314 19:18:25.992057    9056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:18:26.000605    9056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:18:26.031455    9056 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0314 19:18:26.031587    9056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:18:26.031701    9056 start.go:494] detecting cgroup driver to use...
	I0314 19:18:26.032046    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:18:26.067083    9056 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0314 19:18:26.076097    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 19:18:26.104128    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 19:18:26.124613    9056 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 19:18:26.135602    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 19:18:26.162988    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:18:26.192775    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 19:18:26.219503    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:18:26.246010    9056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:18:26.277308    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 19:18:26.304165    9056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:18:26.321414    9056 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 19:18:26.330549    9056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:18:26.357829    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:18:26.534497    9056 ssh_runner.go:195] Run: sudo systemctl restart containerd
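Taken together, the sed edits above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver, the expected pause image, and the standard CNI config directory. The net effect on the file would look roughly like this (a sketch of the touched keys, not a dump from this run):

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"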
	I0314 19:18:26.562667    9056 start.go:494] detecting cgroup driver to use...
	I0314 19:18:26.571628    9056 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 19:18:26.593280    9056 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0314 19:18:26.593280    9056 command_runner.go:130] > [Unit]
	I0314 19:18:26.593280    9056 command_runner.go:130] > Description=Docker Application Container Engine
	I0314 19:18:26.593280    9056 command_runner.go:130] > Documentation=https://docs.docker.com
	I0314 19:18:26.593280    9056 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0314 19:18:26.593280    9056 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0314 19:18:26.593280    9056 command_runner.go:130] > StartLimitBurst=3
	I0314 19:18:26.593280    9056 command_runner.go:130] > StartLimitIntervalSec=60
	I0314 19:18:26.593280    9056 command_runner.go:130] > [Service]
	I0314 19:18:26.593280    9056 command_runner.go:130] > Type=notify
	I0314 19:18:26.593280    9056 command_runner.go:130] > Restart=on-failure
	I0314 19:18:26.593280    9056 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0314 19:18:26.593280    9056 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0314 19:18:26.593280    9056 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0314 19:18:26.593280    9056 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0314 19:18:26.593280    9056 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0314 19:18:26.593280    9056 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0314 19:18:26.593280    9056 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0314 19:18:26.593280    9056 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0314 19:18:26.593280    9056 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0314 19:18:26.593280    9056 command_runner.go:130] > ExecStart=
	I0314 19:18:26.593280    9056 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0314 19:18:26.593280    9056 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0314 19:18:26.593280    9056 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0314 19:18:26.593280    9056 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0314 19:18:26.593280    9056 command_runner.go:130] > LimitNOFILE=infinity
	I0314 19:18:26.593280    9056 command_runner.go:130] > LimitNPROC=infinity
	I0314 19:18:26.593280    9056 command_runner.go:130] > LimitCORE=infinity
	I0314 19:18:26.593280    9056 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0314 19:18:26.593280    9056 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0314 19:18:26.593280    9056 command_runner.go:130] > TasksMax=infinity
	I0314 19:18:26.593280    9056 command_runner.go:130] > TimeoutStartSec=0
	I0314 19:18:26.593280    9056 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0314 19:18:26.593280    9056 command_runner.go:130] > Delegate=yes
	I0314 19:18:26.593280    9056 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0314 19:18:26.593280    9056 command_runner.go:130] > KillMode=process
	I0314 19:18:26.593280    9056 command_runner.go:130] > [Install]
	I0314 19:18:26.593280    9056 command_runner.go:130] > WantedBy=multi-user.target
	I0314 19:18:26.605321    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:18:26.636447    9056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:18:26.682445    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:18:26.713357    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:18:26.746168    9056 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 19:18:26.802754    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:18:26.824378    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:18:26.860447    9056 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0314 19:18:26.870283    9056 ssh_runner.go:195] Run: which cri-dockerd
	I0314 19:18:26.876264    9056 command_runner.go:130] > /usr/bin/cri-dockerd
	I0314 19:18:26.885200    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 19:18:26.902005    9056 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 19:18:26.939356    9056 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 19:18:27.142269    9056 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 19:18:27.320008    9056 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 19:18:27.320267    9056 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 19:18:27.362002    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:18:27.549532    9056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 19:18:30.042540    9056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.492822s)
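The 130-byte /etc/docker/daemon.json written above is what switches Docker itself to the cgroupfs driver. Its exact contents are not logged; a plausible shape, assuming minikube's usual defaults, is:

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }

Only the cgroupfs exec-opt is implied by the log line; the remaining keys are an assumption.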
	I0314 19:18:30.054314    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 19:18:30.088796    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:18:30.124499    9056 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 19:18:30.308986    9056 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 19:18:30.496428    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:18:30.695419    9056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 19:18:30.734107    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:18:30.772796    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:18:30.969330    9056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 19:18:31.068994    9056 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 19:18:31.080006    9056 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 19:18:31.088926    9056 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0314 19:18:31.088926    9056 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 19:18:31.088926    9056 command_runner.go:130] > Device: 0,22	Inode: 877         Links: 1
	I0314 19:18:31.089076    9056 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0314 19:18:31.089076    9056 command_runner.go:130] > Access: 2024-03-14 19:18:31.250275803 +0000
	I0314 19:18:31.089076    9056 command_runner.go:130] > Modify: 2024-03-14 19:18:31.250275803 +0000
	I0314 19:18:31.089076    9056 command_runner.go:130] > Change: 2024-03-14 19:18:31.254276381 +0000
	I0314 19:18:31.089076    9056 command_runner.go:130] >  Birth: -
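Rather than sleeping a fixed interval, start.go polls until the CRI socket exists (here it is present on the first stat). A sketch of that wait loop, again assuming the hypothetical sshRun helper:

    package sketch

    import (
    	"fmt"
    	"time"
    )

    // waitForSocket sketches the "Will wait 60s for socket path" step above:
    // poll stat(1) on the guest until the socket appears or the deadline passes.
    func waitForSocket(sshRun func(cmd string) (string, error), path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := sshRun("stat " + path); err == nil {
    			return nil // socket exists
    		}
    		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }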
	I0314 19:18:31.089131    9056 start.go:562] Will wait 60s for crictl version
	I0314 19:18:31.098643    9056 ssh_runner.go:195] Run: which crictl
	I0314 19:18:31.103542    9056 command_runner.go:130] > /usr/bin/crictl
	I0314 19:18:31.112319    9056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:18:31.188753    9056 command_runner.go:130] > Version:  0.1.0
	I0314 19:18:31.188847    9056 command_runner.go:130] > RuntimeName:  docker
	I0314 19:18:31.188901    9056 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0314 19:18:31.188901    9056 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 19:18:31.188950    9056 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 19:18:31.198784    9056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:18:31.231311    9056 command_runner.go:130] > 25.0.4
	I0314 19:18:31.239207    9056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:18:31.269296    9056 command_runner.go:130] > 25.0.4
	I0314 19:18:31.275180    9056 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 19:18:31.275413    9056 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 19:18:31.279142    9056 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 19:18:31.279142    9056 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 19:18:31.279142    9056 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 19:18:31.279142    9056 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 19:18:31.281293    9056 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 19:18:31.281293    9056 ip.go:210] interface addr: 172.17.80.1/20
	I0314 19:18:31.289921    9056 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 19:18:31.293554    9056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:18:31.317809    9056 kubeadm.go:877] updating cluster {Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:18:31.318045    9056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:18:31.325013    9056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 19:18:31.347724    9056 docker.go:685] Got preloaded images: 
	I0314 19:18:31.347724    9056 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0314 19:18:31.356761    9056 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 19:18:31.373706    9056 command_runner.go:139] > {"Repositories":{}}
	I0314 19:18:31.383003    9056 ssh_runner.go:195] Run: which lz4
	I0314 19:18:31.388759    9056 command_runner.go:130] > /usr/bin/lz4
	I0314 19:18:31.388759    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0314 19:18:31.397934    9056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 19:18:31.403378    9056 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:18:31.404263    9056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:18:31.404435    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0314 19:18:33.233446    9056 docker.go:649] duration metric: took 1.8445489s to copy over tarball
	I0314 19:18:33.242956    9056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:18:43.700549    9056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.4568137s)
	I0314 19:18:43.700549    9056 ssh_runner.go:146] rm: /preloaded.tar.lz4
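The preload path above is: check whether /preloaded.tar.lz4 already exists on the guest (here it does not), scp the ~423 MB tarball over, extract it into /var with lz4 decompression and xattrs preserved, then delete the tarball. As a Go sketch, with hypothetical sshRun and scpFile helpers:

    package sketch

    // loadPreload sketches the copy-and-extract flow logged above
    // (illustrative; not minikube's actual code).
    func loadPreload(
    	sshRun func(cmd string) (string, error),
    	scpFile func(local, remote string) error,
    	localTarball string,
    ) error {
    	// Copy only if the guest does not already have the tarball.
    	if _, err := sshRun(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
    		if err := scpFile(localTarball, "/preloaded.tar.lz4"); err != nil {
    			return err
    		}
    	}
    	// Preserve xattrs (security.capability) so binaries keep their capabilities.
    	if _, err := sshRun("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
    		return err
    	}
    	_, err := sshRun("sudo rm -f /preloaded.tar.lz4")
    	return err
    }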
	I0314 19:18:43.773175    9056 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 19:18:43.792166    9056 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021
a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0314 19:18:43.792451    9056 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0314 19:18:43.840521    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:18:44.029057    9056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 19:18:46.586431    9056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5571073s)
	I0314 19:18:46.598318    9056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0314 19:18:46.622400    9056 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0314 19:18:46.622400    9056 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:18:46.622400    9056 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 19:18:46.622400    9056 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:18:46.622400    9056 kubeadm.go:928] updating node { 172.17.86.124 8443 v1.28.4 docker true true} ...
	I0314 19:18:46.622986    9056 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-442000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.86.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:18:46.629705    9056 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 19:18:46.666793    9056 command_runner.go:130] > cgroupfs
	I0314 19:18:46.667103    9056 cni.go:84] Creating CNI manager for ""
	I0314 19:18:46.667103    9056 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 19:18:46.667103    9056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:18:46.667230    9056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.86.124 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-442000 NodeName:multinode-442000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.86.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.86.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:18:46.667230    9056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.86.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-442000"
	  kubeletExtraArgs:
	    node-ip: 172.17.86.124
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.86.124"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:18:46.678139    9056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:18:46.696679    9056 command_runner.go:130] > kubeadm
	I0314 19:18:46.696679    9056 command_runner.go:130] > kubectl
	I0314 19:18:46.696679    9056 command_runner.go:130] > kubelet
	I0314 19:18:46.696842    9056 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:18:46.708843    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:18:46.724257    9056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0314 19:18:46.752717    9056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:18:46.780544    9056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0314 19:18:46.822612    9056 ssh_runner.go:195] Run: grep 172.17.86.124	control-plane.minikube.internal$ /etc/hosts
	I0314 19:18:46.829333    9056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.86.124	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:18:46.861506    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:18:47.054190    9056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:18:47.081136    9056 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000 for IP: 172.17.86.124
	I0314 19:18:47.081136    9056 certs.go:194] generating shared ca certs ...
	I0314 19:18:47.081136    9056 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:47.081954    9056 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 19:18:47.082211    9056 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 19:18:47.082413    9056 certs.go:256] generating profile certs ...
	I0314 19:18:47.082596    9056 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.key
	I0314 19:18:47.082596    9056 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.crt with IP's: []
	I0314 19:18:47.772197    9056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.crt ...
	I0314 19:18:47.772197    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.crt: {Name:mk545a60be574dec3fdd9c0bdd4bc1a78ea65cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:47.773873    9056 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.key ...
	I0314 19:18:47.773873    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.key: {Name:mk2d9c6fdded790c868f4caa7c901c68b0d2eeab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:47.774624    9056 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.002627ae
	I0314 19:18:47.774624    9056 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.002627ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.86.124]
	I0314 19:18:47.871579    9056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.002627ae ...
	I0314 19:18:47.871579    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.002627ae: {Name:mk63e0c2d38619ba447112803b6467570af87b1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:47.873221    9056 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.002627ae ...
	I0314 19:18:47.873221    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.002627ae: {Name:mk6888b3a912b516db6a768e391b58d87d8289c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:47.874381    9056 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.002627ae -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt
	I0314 19:18:47.884576    9056 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.002627ae -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key
	I0314 19:18:47.885021    9056 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key
	I0314 19:18:47.885021    9056 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt with IP's: []
	I0314 19:18:48.305106    9056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt ...
	I0314 19:18:48.305106    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt: {Name:mk5cc46379e7ac8682b21938dc25812f50e62cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:48.307104    9056 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key ...
	I0314 19:18:48.307104    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key: {Name:mkfc5ae5158a2239c8b58cc48dab0132785bd0ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:18:48.308098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 19:18:48.308098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 19:18:48.308098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 19:18:48.309098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 19:18:48.309098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 19:18:48.309098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 19:18:48.309098    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 19:18:48.318105    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
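certs.go above reuses the cached minikubeCA and generates fresh profile certificates, including an apiserver serving cert whose SANs are the service IP, loopback addresses, and the node IP. A minimal crypto/x509 sketch of that kind of signing step (illustrative, not minikube's actual code):

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServingCert signs a serving certificate for the given IP SANs with an
    // existing CA, as in the apiserver cert generation logged above.
    func newServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) (certDER []byte, key *rsa.PrivateKey, err error) {
    	key, err = rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2), // real code must use unique serials
    		Subject:      pkix.Name{CommonName: "minikube"},
    		IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 10.0.0.1, 172.17.86.124
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	certDER, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return certDER, key, err
    }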
	I0314 19:18:48.319273    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 19:18:48.319683    9056 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 19:18:48.319683    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 19:18:48.319978    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 19:18:48.320169    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 19:18:48.320372    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 19:18:48.320508    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 19:18:48.320808    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:18:48.320922    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 19:18:48.321033    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 19:18:48.321174    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:18:48.365785    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 19:18:48.412843    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:18:48.455211    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 19:18:48.502515    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 19:18:48.545760    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:18:48.588738    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:18:48.630735    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:18:48.673162    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:18:48.716890    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 19:18:48.760292    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 19:18:48.806419    9056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:18:48.847648    9056 ssh_runner.go:195] Run: openssl version
	I0314 19:18:48.856127    9056 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 19:18:48.865886    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:18:48.896637    9056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:18:48.903918    9056 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:18:48.904076    9056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:18:48.916662    9056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:18:48.925461    9056 command_runner.go:130] > b5213941
	I0314 19:18:48.934798    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:18:48.962523    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 19:18:48.998814    9056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 19:18:49.006963    9056 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:18:49.007903    9056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:18:49.015949    9056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 19:18:49.024724    9056 command_runner.go:130] > 51391683
	I0314 19:18:49.035096    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
	I0314 19:18:49.062566    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 19:18:49.091548    9056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 19:18:49.098717    9056 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:18:49.098786    9056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:18:49.107493    9056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 19:18:49.115720    9056 command_runner.go:130] > 3ec20f2e
	I0314 19:18:49.124628    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
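Each CA installed above gets a /etc/ssl/certs/<subject-hash>.0 symlink, which is how OpenSSL locates trust anchors; the hashes b5213941, 51391683, and 3ec20f2e in the log become exactly those filenames. A sketch of one round, with the hypothetical sshRun helper:

    package sketch

    import (
    	"fmt"
    	"strings"
    )

    // installCACertLink sketches the hash-symlink step logged above.
    func installCACertLink(sshRun func(cmd string) (string, error), pemPath string) error {
    	hash, err := sshRun("openssl x509 -hash -noout -in " + pemPath)
    	if err != nil {
    		return err
    	}
    	_, err = sshRun(fmt.Sprintf("sudo ln -fs %s /etc/ssl/certs/%s.0",
    		pemPath, strings.TrimSpace(hash)))
    	return err
    }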
	I0314 19:18:49.152394    9056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:18:49.161813    9056 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:18:49.162265    9056 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:18:49.162566    9056 kubeadm.go:391] StartCluster: {Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
8.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:18:49.169506    9056 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 19:18:49.203112    9056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 19:18:49.219922    9056 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0314 19:18:49.219922    9056 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0314 19:18:49.219922    9056 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0314 19:18:49.229511    9056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:18:49.255020    9056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:18:49.270515    9056 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0314 19:18:49.271515    9056 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0314 19:18:49.271515    9056 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0314 19:18:49.271515    9056 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:18:49.271515    9056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:18:49.271515    9056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:18:49.280229    9056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:18:49.296207    9056 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:18:49.296308    9056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:18:49.304317    9056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:18:49.330327    9056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:18:49.345989    9056 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:18:49.345989    9056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:18:49.354886    9056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:18:49.378036    9056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:18:49.397827    9056 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:18:49.397827    9056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:18:49.407020    9056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:18:49.433744    9056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:18:49.448828    9056 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:18:49.449779    9056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:18:49.461484    9056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
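The four grep/rm pairs above follow a single pattern: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint (or does not exist at all) is removed so that kubeadm init regenerates it. Generalized as an illustrative helper:

    package sketch

    // cleanStaleConfigs sketches the stale-config cleanup logged above
    // (best-effort, matching the grep/rm sequence; names are illustrative).
    func cleanStaleConfigs(sshRun func(cmd string) (string, error), endpoint string) {
    	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + name
    		if _, err := sshRun("sudo grep " + endpoint + " " + path); err != nil {
    			// Endpoint not found or file missing: remove so kubeadm recreates it.
    			sshRun("sudo rm -f " + path)
    		}
    	}
    }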
	I0314 19:18:49.477305    9056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:18:49.871822    9056 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:18:49.871822    9056 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:19:04.214413    9056 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:19:04.214413    9056 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0314 19:19:04.214575    9056 command_runner.go:130] > [preflight] Running pre-flight checks
	I0314 19:19:04.214680    9056 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:19:04.214975    9056 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:19:04.214975    9056 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:19:04.215276    9056 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:19:04.215276    9056 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:19:04.215482    9056 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:19:04.215482    9056 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:19:04.215699    9056 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:19:04.215753    9056 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:19:04.220176    9056 out.go:204]   - Generating certificates and keys ...
	I0314 19:19:04.220176    9056 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0314 19:19:04.220176    9056 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:19:04.220721    9056 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0314 19:19:04.220721    9056 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:19:04.220892    9056 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 19:19:04.220892    9056 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 19:19:04.220892    9056 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0314 19:19:04.220892    9056 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 19:19:04.220892    9056 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0314 19:19:04.220892    9056 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 19:19:04.221436    9056 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 19:19:04.221436    9056 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0314 19:19:04.221567    9056 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 19:19:04.221567    9056 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0314 19:19:04.221777    9056 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-442000] and IPs [172.17.86.124 127.0.0.1 ::1]
	I0314 19:19:04.221777    9056 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-442000] and IPs [172.17.86.124 127.0.0.1 ::1]
	I0314 19:19:04.221777    9056 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 19:19:04.221777    9056 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0314 19:19:04.221777    9056 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-442000] and IPs [172.17.86.124 127.0.0.1 ::1]
	I0314 19:19:04.221777    9056 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-442000] and IPs [172.17.86.124 127.0.0.1 ::1]
	I0314 19:19:04.221777    9056 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 19:19:04.222315    9056 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 19:19:04.222467    9056 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 19:19:04.222467    9056 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 19:19:04.222467    9056 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 19:19:04.222467    9056 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0314 19:19:04.222467    9056 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:19:04.222467    9056 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:19:04.222467    9056 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:19:04.222467    9056 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:19:04.222467    9056 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:19:04.222467    9056 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:19:04.222467    9056 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:19:04.222467    9056 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:19:04.222467    9056 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:19:04.222467    9056 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:19:04.222467    9056 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:19:04.222467    9056 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:19:04.223490    9056 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:19:04.223490    9056 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:19:04.225760    9056 out.go:204]   - Booting up control plane ...
	I0314 19:19:04.225760    9056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:19:04.225760    9056 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:19:04.225760    9056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:19:04.225760    9056 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:19:04.226779    9056 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:19:04.226833    9056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:19:04.227045    9056 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:19:04.227045    9056 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:19:04.227256    9056 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:19:04.227256    9056 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:19:04.227411    9056 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:19:04.227411    9056 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0314 19:19:04.227563    9056 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:19:04.227563    9056 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:19:04.227926    9056 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.003900 seconds
	I0314 19:19:04.227926    9056 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.003900 seconds
	I0314 19:19:04.227926    9056 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:19:04.227926    9056 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:19:04.227926    9056 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:19:04.228466    9056 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:19:04.228595    9056 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:19:04.228646    9056 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:19:04.228805    9056 kubeadm.go:309] [mark-control-plane] Marking the node multinode-442000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:19:04.228805    9056 command_runner.go:130] > [mark-control-plane] Marking the node multinode-442000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:19:04.228805    9056 command_runner.go:130] > [bootstrap-token] Using token: 7bdjrk.zjci8xrpcan3qcz1
	I0314 19:19:04.229217    9056 kubeadm.go:309] [bootstrap-token] Using token: 7bdjrk.zjci8xrpcan3qcz1
	I0314 19:19:04.233385    9056 out.go:204]   - Configuring RBAC rules ...
	I0314 19:19:04.233385    9056 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:19:04.233385    9056 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:19:04.233385    9056 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:19:04.233385    9056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:19:04.233385    9056 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:19:04.233385    9056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:19:04.234382    9056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:19:04.234382    9056 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:19:04.234382    9056 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:19:04.234382    9056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:19:04.234382    9056 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:19:04.234382    9056 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:19:04.234382    9056 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:19:04.234382    9056 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:19:04.234382    9056 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:19:04.234382    9056 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0314 19:19:04.235394    9056 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:19:04.235394    9056 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0314 19:19:04.235448    9056 kubeadm.go:309] 
	I0314 19:19:04.235579    9056 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0314 19:19:04.235579    9056 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:19:04.235579    9056 kubeadm.go:309] 
	I0314 19:19:04.235787    9056 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0314 19:19:04.235787    9056 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:19:04.235787    9056 kubeadm.go:309] 
	I0314 19:19:04.235787    9056 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0314 19:19:04.235787    9056 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:19:04.236009    9056 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:19:04.236061    9056 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:19:04.236263    9056 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:19:04.236263    9056 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:19:04.236312    9056 kubeadm.go:309] 
	I0314 19:19:04.236465    9056 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:19:04.236465    9056 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0314 19:19:04.236513    9056 kubeadm.go:309] 
	I0314 19:19:04.236667    9056 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:19:04.236667    9056 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:19:04.236667    9056 kubeadm.go:309] 
	I0314 19:19:04.236770    9056 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:19:04.236828    9056 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0314 19:19:04.236977    9056 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:19:04.236977    9056 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:19:04.237099    9056 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:19:04.237148    9056 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:19:04.237148    9056 kubeadm.go:309] 
	I0314 19:19:04.237311    9056 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:19:04.237311    9056 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:19:04.237518    9056 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0314 19:19:04.237518    9056 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:19:04.237518    9056 kubeadm.go:309] 
	I0314 19:19:04.237695    9056 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 7bdjrk.zjci8xrpcan3qcz1 \
	I0314 19:19:04.237695    9056 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7bdjrk.zjci8xrpcan3qcz1 \
	I0314 19:19:04.237904    9056 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb \
	I0314 19:19:04.237904    9056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb \
	I0314 19:19:04.237904    9056 command_runner.go:130] > 	--control-plane 
	I0314 19:19:04.237904    9056 kubeadm.go:309] 	--control-plane 
	I0314 19:19:04.237904    9056 kubeadm.go:309] 
	I0314 19:19:04.238240    9056 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:19:04.238289    9056 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:19:04.238289    9056 kubeadm.go:309] 
	I0314 19:19:04.238523    9056 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7bdjrk.zjci8xrpcan3qcz1 \
	I0314 19:19:04.238523    9056 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7bdjrk.zjci8xrpcan3qcz1 \
	I0314 19:19:04.238523    9056 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb 
	I0314 19:19:04.238523    9056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb 
	I0314 19:19:04.238523    9056 cni.go:84] Creating CNI manager for ""
	I0314 19:19:04.238523    9056 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 19:19:04.242798    9056 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0314 19:19:04.260930    9056 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0314 19:19:04.269683    9056 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0314 19:19:04.269683    9056 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0314 19:19:04.269736    9056 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0314 19:19:04.269736    9056 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 19:19:04.269736    9056 command_runner.go:130] > Access: 2024-03-14 19:17:10.602275800 +0000
	I0314 19:19:04.269736    9056 command_runner.go:130] > Modify: 2024-03-13 22:53:41.000000000 +0000
	I0314 19:19:04.269736    9056 command_runner.go:130] > Change: 2024-03-14 19:17:03.878000000 +0000
	I0314 19:19:04.269736    9056 command_runner.go:130] >  Birth: -
	I0314 19:19:04.269818    9056 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0314 19:19:04.269818    9056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0314 19:19:04.339460    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0314 19:19:05.676402    9056 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0314 19:19:05.676483    9056 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0314 19:19:05.676483    9056 command_runner.go:130] > serviceaccount/kindnet created
	I0314 19:19:05.676483    9056 command_runner.go:130] > daemonset.apps/kindnet created
	I0314 19:19:05.676483    9056 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.3368686s)
	I0314 19:19:05.676483    9056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:19:05.688358    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:05.688358    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-442000 minikube.k8s.io/updated_at=2024_03_14T19_19_05_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=multinode-442000 minikube.k8s.io/primary=true
	I0314 19:19:05.699880    9056 command_runner.go:130] > -16
	I0314 19:19:05.699988    9056 ops.go:34] apiserver oom_adj: -16
	I0314 19:19:05.830705    9056 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0314 19:19:05.845452    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:05.873831    9056 command_runner.go:130] > node/multinode-442000 labeled
	I0314 19:19:05.976674    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:06.351246    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:06.468279    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:06.853167    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:06.972342    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:07.357540    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:07.473687    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:07.859031    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:07.972654    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:08.348837    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:08.464633    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:08.851437    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:08.978823    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:09.358192    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:09.480197    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:09.859579    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:09.974860    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:10.359445    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:10.474730    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:10.848958    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:10.961216    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:11.351509    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:11.470062    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:11.857829    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:11.974743    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:12.362418    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:12.476698    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:12.863953    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:12.981830    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:13.358618    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:13.482240    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:13.850705    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:13.984052    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:14.354759    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:14.496496    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:14.855799    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:14.974371    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:15.357773    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:15.480087    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:15.860414    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:16.018775    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:16.361497    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:16.492509    9056 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0314 19:19:16.853007    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:19:16.992271    9056 command_runner.go:130] > NAME      SECRETS   AGE
	I0314 19:19:16.992338    9056 command_runner.go:130] > default   0         1s
	I0314 19:19:16.992943    9056 kubeadm.go:1106] duration metric: took 11.3154205s to wait for elevateKubeSystemPrivileges
	W0314 19:19:16.992980    9056 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:19:16.992980    9056 kubeadm.go:393] duration metric: took 27.8283313s to StartCluster
	I0314 19:19:16.992980    9056 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:19:16.992980    9056 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:19:16.995018    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:19:16.996295    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 19:19:16.996423    9056 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 19:19:16.998924    9056 out.go:177] * Verifying Kubernetes components...
	I0314 19:19:16.996476    9056 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:19:16.996828    9056 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:19:16.998988    9056 addons.go:69] Setting storage-provisioner=true in profile "multinode-442000"
	I0314 19:19:16.998988    9056 addons.go:69] Setting default-storageclass=true in profile "multinode-442000"
	I0314 19:19:17.003107    9056 addons.go:234] Setting addon storage-provisioner=true in "multinode-442000"
	I0314 19:19:17.003107    9056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-442000"
	I0314 19:19:17.003107    9056 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:19:17.003759    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:19:17.004293    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:19:17.012093    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:19:17.489407    9056 command_runner.go:130] > apiVersion: v1
	I0314 19:19:17.489472    9056 command_runner.go:130] > data:
	I0314 19:19:17.489472    9056 command_runner.go:130] >   Corefile: |
	I0314 19:19:17.489472    9056 command_runner.go:130] >     .:53 {
	I0314 19:19:17.489472    9056 command_runner.go:130] >         errors
	I0314 19:19:17.489558    9056 command_runner.go:130] >         health {
	I0314 19:19:17.489558    9056 command_runner.go:130] >            lameduck 5s
	I0314 19:19:17.489558    9056 command_runner.go:130] >         }
	I0314 19:19:17.489558    9056 command_runner.go:130] >         ready
	I0314 19:19:17.489619    9056 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0314 19:19:17.489619    9056 command_runner.go:130] >            pods insecure
	I0314 19:19:17.489619    9056 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0314 19:19:17.489619    9056 command_runner.go:130] >            ttl 30
	I0314 19:19:17.489619    9056 command_runner.go:130] >         }
	I0314 19:19:17.489704    9056 command_runner.go:130] >         prometheus :9153
	I0314 19:19:17.489752    9056 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0314 19:19:17.489752    9056 command_runner.go:130] >            max_concurrent 1000
	I0314 19:19:17.489752    9056 command_runner.go:130] >         }
	I0314 19:19:17.489793    9056 command_runner.go:130] >         cache 30
	I0314 19:19:17.489793    9056 command_runner.go:130] >         loop
	I0314 19:19:17.489793    9056 command_runner.go:130] >         reload
	I0314 19:19:17.489793    9056 command_runner.go:130] >         loadbalance
	I0314 19:19:17.489793    9056 command_runner.go:130] >     }
	I0314 19:19:17.489793    9056 command_runner.go:130] > kind: ConfigMap
	I0314 19:19:17.489793    9056 command_runner.go:130] > metadata:
	I0314 19:19:17.489793    9056 command_runner.go:130] >   creationTimestamp: "2024-03-14T19:19:04Z"
	I0314 19:19:17.489793    9056 command_runner.go:130] >   name: coredns
	I0314 19:19:17.489793    9056 command_runner.go:130] >   namespace: kube-system
	I0314 19:19:17.489929    9056 command_runner.go:130] >   resourceVersion: "266"
	I0314 19:19:17.489929    9056 command_runner.go:130] >   uid: 01b5c7b7-d3d3-4522-bf6f-df10e46139e7
	I0314 19:19:17.490211    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 19:19:17.503235    9056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:19:18.064002    9056 command_runner.go:130] > configmap/coredns replaced
	I0314 19:19:18.064002    9056 start.go:948] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
	I0314 19:19:18.065322    9056 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:19:18.065322    9056 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:19:18.066017    9056 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.86.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:19:18.066017    9056 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.86.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:19:18.066766    9056 cert_rotation.go:137] Starting client certificate rotation controller
	I0314 19:19:18.067371    9056 node_ready.go:35] waiting up to 6m0s for node "multinode-442000" to be "Ready" ...
	I0314 19:19:18.067371    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:18.067371    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:18.067371    9056 round_trippers.go:463] GET https://172.17.86.124:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0314 19:19:18.067371    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:18.067371    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:18.067371    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:18.067371    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:18.067987    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:18.102163    9056 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0314 19:19:18.102163    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:18.102240    9056 round_trippers.go:580]     Audit-Id: 6b5b8f7e-ea5d-4f15-81b1-69de6a223cc0
	I0314 19:19:18.102240    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:18.102240    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:18.102283    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:18.102283    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:18.102283    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:18 GMT
	I0314 19:19:18.102283    9056 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0314 19:19:18.102349    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:18.102349    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:18.102389    9056 round_trippers.go:580]     Content-Length: 291
	I0314 19:19:18.102389    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:18 GMT
	I0314 19:19:18.102389    9056 round_trippers.go:580]     Audit-Id: e790f2b7-a1ab-4564-ac6d-af208af54880
	I0314 19:19:18.102509    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:18.102509    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:18.102571    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:18.102571    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:18.102626    9056 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7a59f7e-5968-4b64-8f4a-c66c9223a024","resourceVersion":"386","creationTimestamp":"2024-03-14T19:19:04Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0314 19:19:18.103422    9056 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7a59f7e-5968-4b64-8f4a-c66c9223a024","resourceVersion":"386","creationTimestamp":"2024-03-14T19:19:04Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0314 19:19:18.103588    9056 round_trippers.go:463] PUT https://172.17.86.124:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0314 19:19:18.103588    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:18.103588    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:18.103588    9056 round_trippers.go:473]     Content-Type: application/json
	I0314 19:19:18.103588    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:18.118184    9056 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0314 19:19:18.118553    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:18.118553    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:18.118553    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:18.118553    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:18.118553    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:18.118553    9056 round_trippers.go:580]     Content-Length: 291
	I0314 19:19:18.118659    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:18 GMT
	I0314 19:19:18.118659    9056 round_trippers.go:580]     Audit-Id: a40d401a-800f-4667-9857-58f7fa0a2917
	I0314 19:19:18.118715    9056 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7a59f7e-5968-4b64-8f4a-c66c9223a024","resourceVersion":"392","creationTimestamp":"2024-03-14T19:19:04Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0314 19:19:18.570743    9056 round_trippers.go:463] GET https://172.17.86.124:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0314 19:19:18.570923    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:18.571169    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:18.570923    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:18.571169    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:18.571258    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:18.571258    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:18.571449    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:18.575062    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:18.575062    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:18.575062    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:18.575062    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:18.575062    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:18.575062    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:18.575062    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:18.575062    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:18 GMT
	I0314 19:19:18.575062    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:18.575194    9056 round_trippers.go:580]     Content-Length: 291
	I0314 19:19:18.575062    9056 round_trippers.go:580]     Audit-Id: bba6e845-9964-4e12-9503-a3b58ec97b45
	I0314 19:19:18.575194    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:18 GMT
	I0314 19:19:18.575250    9056 round_trippers.go:580]     Audit-Id: f7a97983-5907-45a6-950c-6a73c04c318b
	I0314 19:19:18.575250    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:18.575304    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:18.575304    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:18.575304    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:18.575362    9056 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7a59f7e-5968-4b64-8f4a-c66c9223a024","resourceVersion":"404","creationTimestamp":"2024-03-14T19:19:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0314 19:19:18.575489    9056 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-442000" context rescaled to 1 replicas
	I0314 19:19:18.575585    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:19.080675    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:19.080675    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:19.080675    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:19.080675    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:19.082653    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:19:19.082653    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:19.083432    9056 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:19:19.084684    9056 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.86.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:19:19.084765    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:19:19.084765    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:19.088194    9056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:19:19.085561    9056 addons.go:234] Setting addon default-storageclass=true in "multinode-442000"
	I0314 19:19:19.088194    9056 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:19:19.088194    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:19.088194    9056 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:19:19.090763    9056 round_trippers.go:580]     Audit-Id: 217b6e06-1491-4dcd-8ae5-925dafa30ec6
	I0314 19:19:19.090878    9056 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:19:19.090930    9056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:19:19.090878    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:19.090993    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:19.091058    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:19.091058    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:19.091058    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:19 GMT
	I0314 19:19:19.091058    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:19:19.091321    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:19.091972    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:19:19.573919    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:19.573919    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:19.573919    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:19.573919    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:19.577239    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:19.578145    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:19.578145    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:19.578210    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:19 GMT
	I0314 19:19:19.578210    9056 round_trippers.go:580]     Audit-Id: 04b17c99-5c07-4abb-8cee-35bb32126611
	I0314 19:19:19.578210    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:19.578210    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:19.578210    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:19.578581    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:20.067880    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:20.068108    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:20.068108    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:20.068108    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:20.072269    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:19:20.072356    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:20.072356    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:20.072356    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:20.072356    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:20.072356    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:20.072356    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:20 GMT
	I0314 19:19:20.072356    9056 round_trippers.go:580]     Audit-Id: a8f9b68f-edca-4171-ab4e-64ba16530ae8
	I0314 19:19:20.072725    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:20.073491    9056 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
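
For readers tracing this loop: the repeated GET /api/v1/nodes/multinode-442000 requests above come from a readiness poll that re-fetches the Node object roughly every 500ms and inspects its Ready condition, logging "Ready":"False" until it flips. A minimal client-go sketch of that pattern follows; it assumes a kubeconfig path, and the function name (pollNodeReady) is illustrative, not minikube's actual code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // pollNodeReady re-fetches the Node until its Ready condition is True,
    // mirroring the GET loop and the node_ready.go verdict lines above.
    func pollNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence above
            }
        }
    }

    func main() {
        // The kubeconfig path here is an assumption for the sketch.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := pollNodeReady(ctx, cs, "multinode-442000"); err != nil {
            panic(err)
        }
    }
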
	I0314 19:19:20.572963    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:20.572963    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:20.572963    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:20.572963    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:20.575562    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:20.576319    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:20.576319    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:20 GMT
	I0314 19:19:20.576319    9056 round_trippers.go:580]     Audit-Id: c2e3a888-bb56-4e91-b136-4831aa035ed6
	I0314 19:19:20.576319    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:20.576319    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:20.576393    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:20.576393    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:20.576393    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:21.079484    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:21.079566    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:21.079566    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:21.079566    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:21.083774    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:21.083774    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:21.083842    9056 round_trippers.go:580]     Audit-Id: f9f5c998-56e0-4f4a-baa7-1b1e10037146
	I0314 19:19:21.083842    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:21.083842    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:21.083842    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:21.083842    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:21.083842    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:21 GMT
	I0314 19:19:21.083842    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:21.239225    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:19:21.239225    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:21.239313    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:19:21.240101    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:19:21.240101    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:21.240286    9056 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:19:21.240310    9056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:19:21.240362    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
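
The [executing ==>] lines show how the Hyper-V driver shells out to PowerShell, once for the VM state and once for the guest IP. The same lookup can be reproduced from Go with os/exec; in the sketch below the PowerShell expression is copied verbatim from the log, while the function name (hypervVMIP) is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hypervVMIP runs the same PowerShell expression seen in the log above and
    // returns the first IP address of the VM's first network adapter.
    func hypervVMIP(vmName string) (string, error) {
        expr := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

Until the adapter has reported an address, this query can come back empty even while the VM state is Running, which is consistent with the driver re-running it between the API polls above before 172.17.86.124 finally appears.
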
	I0314 19:19:21.574085    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:21.574202    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:21.574202    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:21.574202    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:21.577508    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:21.577945    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:21.577945    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:21.577945    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:21.577945    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:21.577945    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:21 GMT
	I0314 19:19:21.577945    9056 round_trippers.go:580]     Audit-Id: 9ef3bf50-3514-4e53-8fff-c722d8758457
	I0314 19:19:21.577945    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:21.578156    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:22.081626    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:22.081825    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:22.081825    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:22.081901    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:22.086387    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:22.086483    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:22.086483    9056 round_trippers.go:580]     Audit-Id: 70bae9d3-e84f-4009-a4ac-3d3918ffbce2
	I0314 19:19:22.086610    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:22.086610    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:22.086807    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:22.086886    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:22.086886    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:22 GMT
	I0314 19:19:22.087118    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:22.088009    9056 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:19:22.576077    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:22.576077    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:22.576077    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:22.576236    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:22.579577    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:22.579577    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:22.579577    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:22.579577    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:22.579577    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:22 GMT
	I0314 19:19:22.579577    9056 round_trippers.go:580]     Audit-Id: 28eec736-3ccf-4db3-a5cc-eec0c355525f
	I0314 19:19:22.579577    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:22.579577    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:22.580332    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:23.074395    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:23.074395    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:23.074395    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:23.074395    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:23.078393    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:23.078769    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:23.078769    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:23.078769    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:23.078769    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:23.078769    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:23 GMT
	I0314 19:19:23.078863    9056 round_trippers.go:580]     Audit-Id: e5c6dad1-07c4-4e67-baa8-722587f714ac
	I0314 19:19:23.078863    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:23.079243    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:23.355775    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:19:23.355775    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:23.355775    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:19:23.579204    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:23.579204    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:23.579427    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:23.579427    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:23.582705    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:23.582705    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:23.582705    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:23.582705    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:23 GMT
	I0314 19:19:23.582705    9056 round_trippers.go:580]     Audit-Id: b4b05861-b09a-4607-8b5f-e743206f9532
	I0314 19:19:23.582705    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:23.582705    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:23.582705    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:23.583200    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:23.730982    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:19:23.730982    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:23.730982    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:19:23.881768    9056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:19:24.069140    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:24.069209    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:24.069209    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:24.069209    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:24.282192    9056 round_trippers.go:574] Response Status: 200 OK in 212 milliseconds
	I0314 19:19:24.282329    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:24.282329    9056 round_trippers.go:580]     Audit-Id: 9634d9b3-55bd-4d0f-9fc8-15dc9fd03b54
	I0314 19:19:24.282329    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:24.282329    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:24.282329    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:24.282329    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:24.282329    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:24 GMT
	I0314 19:19:24.282329    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:24.283102    9056 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:19:24.578860    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:24.578912    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:24.578912    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:24.578912    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:24.589586    9056 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 19:19:24.589586    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:24.589586    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:24.589586    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:24.589586    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:24.589586    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:24.589586    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:24 GMT
	I0314 19:19:24.589586    9056 round_trippers.go:580]     Audit-Id: f1d9beb4-c387-4100-8f73-e9538013e3b7
	I0314 19:19:24.589983    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:24.679222    9056 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0314 19:19:24.679222    9056 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0314 19:19:24.679222    9056 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0314 19:19:24.679222    9056 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0314 19:19:24.679222    9056 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0314 19:19:24.679222    9056 command_runner.go:130] > pod/storage-provisioner created
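
The six command_runner lines above are the kubectl output for the storage-provisioner addon: the manifest was copied to /etc/kubernetes/addons/storage-provisioner.yaml over SSH and applied inside the VM. A sketch of that run step follows, assuming golang.org/x/crypto/ssh; the host, key path, username, and command are taken from the log lines above, and everything else is illustrative rather than minikube's actual ssh_runner implementation.

    package main

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // applyAddon connects to the minikube VM with the machine's SSH key and runs
    // the same kubectl apply command that ssh_runner.go logs above.
    func applyAddon(host, keyPath, manifest string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker", // username from the sshutil.go line above
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
        }
        client, err := ssh.Dial("tcp", host+":22", cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.28.4/kubectl apply -f " + manifest)
    }
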
	I0314 19:19:25.072422    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:25.072422    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:25.072422    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:25.072422    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:25.077365    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:19:25.077365    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:25.077365    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:25.077365    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:25 GMT
	I0314 19:19:25.077365    9056 round_trippers.go:580]     Audit-Id: b6732572-c3e7-4fd3-a8ca-6f8c97ae221a
	I0314 19:19:25.077365    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:25.077470    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:25.077470    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:25.077470    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:25.579950    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:25.579950    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:25.580026    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:25.580026    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:25.583285    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:25.583285    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:25.583285    9056 round_trippers.go:580]     Audit-Id: 886eaf90-de63-4e1a-980e-9a6b40bcb0cd
	I0314 19:19:25.583285    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:25.583285    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:25.583693    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:25.583693    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:25.583693    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:25 GMT
	I0314 19:19:25.583822    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:25.777662    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:19:25.777748    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:25.778067    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:19:25.909740    9056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:19:26.071150    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:26.071150    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.071150    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.071150    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.075100    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:26.075100    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.075100    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.075100    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.075100    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.075100    9056 round_trippers.go:580]     Audit-Id: c773c41c-c27f-43de-b667-6465f05093ab
	I0314 19:19:26.075100    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.075100    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.076040    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"334","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0314 19:19:26.242307    9056 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0314 19:19:26.242307    9056 round_trippers.go:463] GET https://172.17.86.124:8443/apis/storage.k8s.io/v1/storageclasses
	I0314 19:19:26.242307    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.242307    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.242307    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.247663    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:19:26.247663    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.247663    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.247663    9056 round_trippers.go:580]     Audit-Id: 4b77ca35-2c04-4e7a-bcc8-6a2c2ed571ad
	I0314 19:19:26.247663    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.247663    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.247663    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.247663    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.247663    9056 round_trippers.go:580]     Content-Length: 1273
	I0314 19:19:26.247663    9056 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"standard","uid":"0da12f5d-9716-49a5-a75b-38054817c24c","resourceVersion":"433","creationTimestamp":"2024-03-14T19:19:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-14T19:19:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0314 19:19:26.248280    9056 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"0da12f5d-9716-49a5-a75b-38054817c24c","resourceVersion":"433","creationTimestamp":"2024-03-14T19:19:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-14T19:19:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0314 19:19:26.248350    9056 round_trippers.go:463] PUT https://172.17.86.124:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0314 19:19:26.248350    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.248350    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.248350    9056 round_trippers.go:473]     Content-Type: application/json
	I0314 19:19:26.248350    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.253999    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:19:26.253999    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.253999    9056 round_trippers.go:580]     Content-Length: 1220
	I0314 19:19:26.254991    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.254991    9056 round_trippers.go:580]     Audit-Id: 86735c06-ec35-4db7-b3e6-af50d1f8fe08
	I0314 19:19:26.254991    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.254991    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.254991    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.254991    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.255046    9056 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"0da12f5d-9716-49a5-a75b-38054817c24c","resourceVersion":"433","creationTimestamp":"2024-03-14T19:19:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-14T19:19:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0314 19:19:26.264842    9056 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0314 19:19:26.269836    9056 addons.go:505] duration metric: took 9.2727917s for enable addons: enabled=[storage-provisioner default-storageclass]
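
The default-storageclass addon finished with a GET of the StorageClass list followed by a PUT of "standard" carrying the storageclass.kubernetes.io/is-default-class annotation. In client-go terms that reconciliation is roughly the following; a sketch only, with ensureDefaultStorageClass as an illustrative name.

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureDefaultStorageClass mirrors the GET-then-PUT seen above: fetch the
    // class, set the default-class annotation, and write it back.
    func ensureDefaultStorageClass(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
        return err
    }
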
	I0314 19:19:26.577351    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:26.577351    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.577351    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.577351    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.580750    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:26.580820    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.580820    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.580820    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.580820    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.580820    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.580907    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.580907    9056 round_trippers.go:580]     Audit-Id: ebd9f036-858a-4b44-b7ac-f9af510d8329
	I0314 19:19:26.581136    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:26.581719    9056 node_ready.go:49] node "multinode-442000" has status "Ready":"True"
	I0314 19:19:26.581816    9056 node_ready.go:38] duration metric: took 8.513805s for node "multinode-442000" to be "Ready" ...
	I0314 19:19:26.581816    9056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
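
From here the test waits on system-critical pods, starting with coredns-5dd5756b68-d22jc. The per-pod check behind pod_ready.go boils down to reading the PodReady condition; a short sketch follows, using the same clientset setup as the node-readiness example above, with hypothetical helper names.

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // systemPodsReady lists kube-system pods and reports whether all are Ready,
    // approximating the "extra waiting" loop above (which additionally filters
    // by the k8s-app/component labels listed in the log).
    func systemPodsReady(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for i := range pods.Items {
            if !podReady(&pods.Items[i]) {
                return false, nil
            }
        }
        return true, nil
    }
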
	I0314 19:19:26.581982    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods
	I0314 19:19:26.582051    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.582051    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.582051    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.586801    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:19:26.586801    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.586801    9056 round_trippers.go:580]     Audit-Id: 70c2dec2-adaa-4fdf-8a05-393f5f99d4bd
	I0314 19:19:26.586801    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.586801    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.586801    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.586801    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.586801    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.587727    9056 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"436"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"435","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53972 chars]
	I0314 19:19:26.592227    9056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:26.592445    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:19:26.592445    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.592445    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.592445    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.596093    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:26.596093    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.596093    9056 round_trippers.go:580]     Audit-Id: 2f0985a9-63f5-4fde-9d0c-dc842c55e137
	I0314 19:19:26.596093    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.596093    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.596093    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.596093    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.596093    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.596093    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"435","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0314 19:19:26.596872    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:26.596945    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:26.596945    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:26.596945    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:26.599700    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:26.599700    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:26.599700    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:26.599700    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:26.599700    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:26.599700    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:26.599700    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:26 GMT
	I0314 19:19:26.599700    9056 round_trippers.go:580]     Audit-Id: 663fe3fe-9b93-4aa5-b5e6-3fa42945907f
	I0314 19:19:26.601887    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:27.095517    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:19:27.095517    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:27.095517    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:27.095517    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:27.108224    9056 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0314 19:19:27.108582    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:27.108582    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:27.108582    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:27.108582    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:27.108582    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:27.108582    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:27 GMT
	I0314 19:19:27.108582    9056 round_trippers.go:580]     Audit-Id: 29520785-1034-46c2-8216-b93e3b7b7a5c
	I0314 19:19:27.108582    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"435","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0314 19:19:27.109708    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:27.109740    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:27.109740    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:27.109740    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:27.117976    9056 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 19:19:27.117976    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:27.117976    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:27.117976    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:27.117976    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:27.117976    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:27.117976    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:27 GMT
	I0314 19:19:27.117976    9056 round_trippers.go:580]     Audit-Id: 6f39131d-91ef-4a1e-a5bc-5b5dd792667b
	I0314 19:19:27.119861    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:27.603482    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:19:27.603482    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:27.603570    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:27.603570    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:27.608853    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:19:27.609172    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:27.609172    9056 round_trippers.go:580]     Audit-Id: a79608fe-60ec-4414-80e3-34f22731cf25
	I0314 19:19:27.609172    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:27.609172    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:27.609172    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:27.609172    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:27.609172    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:27 GMT
	I0314 19:19:27.609251    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"435","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0314 19:19:27.610194    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:27.610194    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:27.610194    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:27.610194    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:27.613402    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:27.613448    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:27.613448    9056 round_trippers.go:580]     Audit-Id: 467714c9-a164-4b84-a220-39d749a114e8
	I0314 19:19:27.613448    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:27.613448    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:27.613448    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:27.613448    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:27.613448    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:27 GMT
	I0314 19:19:27.613676    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.107517    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:19:28.107517    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.107517    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.107517    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.117664    9056 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 19:19:28.117664    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.117664    9056 round_trippers.go:580]     Audit-Id: c0159bb5-7a2f-4323-8030-aae7dcb595eb
	I0314 19:19:28.117664    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.117664    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.117664    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.117664    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.117664    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.118328    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"435","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0314 19:19:28.119080    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:28.119080    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.119130    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.119130    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.124348    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:19:28.124348    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.124348    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.124348    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.124348    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.124348    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.124348    9056 round_trippers.go:580]     Audit-Id: c56da031-e102-4f94-b5d2-5928052fb656
	I0314 19:19:28.124348    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.126250    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.606513    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:19:28.606513    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.606513    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.606513    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.610079    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:28.610079    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.610079    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.610079    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.610079    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.610079    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.610079    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.610079    9056 round_trippers.go:580]     Audit-Id: febd4365-d3b2-45c6-8711-c2b7992be1fd
	I0314 19:19:28.611080    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"446","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0314 19:19:28.611788    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:28.611788    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.611788    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.611788    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.615013    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:28.615013    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.615013    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.615013    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.615013    9056 round_trippers.go:580]     Audit-Id: 1cd0f839-3e9d-463a-8648-e2e40d2a06d2
	I0314 19:19:28.615013    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.615013    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.615013    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.615239    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.615685    9056 pod_ready.go:92] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:19:28.615685    9056 pod_ready.go:81] duration metric: took 2.0232146s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.615734    9056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.615849    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-442000
	I0314 19:19:28.615849    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.615894    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.615918    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.619166    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:28.619166    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.619166    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.619166    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.619166    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.619166    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.619166    9056 round_trippers.go:580]     Audit-Id: ae5b51ce-e966-4749-aeeb-072936e71530
	I0314 19:19:28.619166    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.619166    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-442000","namespace":"kube-system","uid":"8974ad44-5d36-48f0-bc6b-9115bab5fb5e","resourceVersion":"410","creationTimestamp":"2024-03-14T19:19:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.86.124:2379","kubernetes.io/config.hash":"92e70beb375f9f247f5f8395dc065033","kubernetes.io/config.mirror":"92e70beb375f9f247f5f8395dc065033","kubernetes.io/config.seen":"2024-03-14T19:18:55.420198507Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0314 19:19:28.619888    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:28.619888    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.619888    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.619888    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.623757    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:28.624293    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.624293    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.624293    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.624293    9056 round_trippers.go:580]     Audit-Id: 4fc6b5a0-1dd1-4cfc-a312-cd69198b73b9
	I0314 19:19:28.624293    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.624293    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.624293    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.624402    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.624402    9056 pod_ready.go:92] pod "etcd-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:19:28.624928    9056 pod_ready.go:81] duration metric: took 9.1675ms for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.624928    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.625045    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-442000
	I0314 19:19:28.625045    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.625045    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.625045    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.627611    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:28.627611    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.627611    9056 round_trippers.go:580]     Audit-Id: 6deaf0bc-c9ce-43b3-b341-7a98dee354b8
	I0314 19:19:28.627611    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.627611    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.627611    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.627611    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.627611    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.627611    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-442000","namespace":"kube-system","uid":"02a2d011-5f4c-451c-9698-a88e42e4b6c9","resourceVersion":"414","creationTimestamp":"2024-03-14T19:19:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.86.124:8443","kubernetes.io/config.hash":"81fdcd9740169a0b72b7c7316eeac39f","kubernetes.io/config.mirror":"81fdcd9740169a0b72b7c7316eeac39f","kubernetes.io/config.seen":"2024-03-14T19:18:55.420203908Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0314 19:19:28.628608    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:28.628608    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.628608    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.628608    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.631458    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:28.631458    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.631458    9056 round_trippers.go:580]     Audit-Id: cbe46535-fa1c-4a7f-b453-433ec118195e
	I0314 19:19:28.631458    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.631458    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.631458    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.631458    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.631458    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.631458    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.631458    9056 pod_ready.go:92] pod "kube-apiserver-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:19:28.631458    9056 pod_ready.go:81] duration metric: took 6.5295ms for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.631458    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.631458    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-442000
	I0314 19:19:28.631458    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.631458    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.631458    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.634824    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:28.634824    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.634824    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.634824    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.634824    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.634824    9056 round_trippers.go:580]     Audit-Id: e39cb887-a74b-42e0-bc5b-c87877cc8897
	I0314 19:19:28.635058    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.635058    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.635119    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-442000","namespace":"kube-system","uid":"b16fc874-ef74-44ca-a54f-bb678bf982df","resourceVersion":"413","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.mirror":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.seen":"2024-03-14T19:18:55.420205308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0314 19:19:28.635701    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:28.635701    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.635701    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.635701    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.638043    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:28.638776    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.638776    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.638901    9056 round_trippers.go:580]     Audit-Id: 227bba82-a6bf-47a2-b336-443e679fcd28
	I0314 19:19:28.638901    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.638901    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.638901    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.638901    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.638901    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.638901    9056 pod_ready.go:92] pod "kube-controller-manager-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:19:28.638901    9056 pod_ready.go:81] duration metric: took 7.4424ms for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.638901    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.639565    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:19:28.639565    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.639565    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.639565    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.642543    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:28.642543    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.642543    9056 round_trippers.go:580]     Audit-Id: 91eb30bf-4a08-40e9-aa4c-144d5583fcac
	I0314 19:19:28.642543    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.642543    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.642543    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.642543    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.642543    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.642543    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cg28g","generateName":"kube-proxy-","namespace":"kube-system","uid":"c7f798bf-6722-4731-af8d-ccd5703d116e","resourceVersion":"405","creationTimestamp":"2024-03-14T19:19:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0314 19:19:28.643203    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:28.643203    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.643203    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.643203    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.645771    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:19:28.646128    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.646128    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:28 GMT
	I0314 19:19:28.646128    9056 round_trippers.go:580]     Audit-Id: 493e59b8-258e-4143-973c-18bbcffd5098
	I0314 19:19:28.646128    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.646128    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.646128    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.646128    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.646356    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:28.646781    9056 pod_ready.go:92] pod "kube-proxy-cg28g" in "kube-system" namespace has status "Ready":"True"
	I0314 19:19:28.646781    9056 pod_ready.go:81] duration metric: took 7.8795ms for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.646781    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:28.807635    9056 request.go:629] Waited for 160.4273ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:19:28.807635    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:19:28.807635    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:28.807635    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:28.807635    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:28.811387    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:28.811387    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:28.811387    9056 round_trippers.go:580]     Audit-Id: 8b171a01-6b46-4930-ac8f-519674737471
	I0314 19:19:28.811387    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:28.811387    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:28.811387    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:28.811387    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:28.811387    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:29 GMT
	I0314 19:19:28.812091    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-442000","namespace":"kube-system","uid":"76b10598-fe0d-4a14-a8e4-a32221fbb68f","resourceVersion":"412","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.mirror":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.seen":"2024-03-14T19:18:55.420206709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0314 19:19:29.011128    9056 request.go:629] Waited for 198.95ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:29.011456    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:19:29.011456    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:29.011510    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:29.011528    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:29.015314    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:29.015359    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:29.015359    9056 round_trippers.go:580]     Audit-Id: 1ee3436a-8578-4b28-a508-96ffbcd4afd2
	I0314 19:19:29.015359    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:29.015359    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:29.015359    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:29.015359    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:29.015359    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:29 GMT
	I0314 19:19:29.015359    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0314 19:19:29.016063    9056 pod_ready.go:92] pod "kube-scheduler-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:19:29.016135    9056 pod_ready.go:81] duration metric: took 369.2555ms for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:19:29.016135    9056 pod_ready.go:38] duration metric: took 2.434136s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
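
The block above is minikube's readiness gate: pod_ready.go re-GETs each system pod (and its node) roughly every 500ms, as the timestamps show (27.609 -> 28.107 -> 28.606), until the pod's Ready condition reports True. A compilable client-go sketch of the same polling pattern; the package, function name, and tunables are illustrative, not minikube's actual implementation, and PollUntilContextTimeout needs a recent k8s.io/apimachinery:

    // Sketch only: poll a pod until its Ready condition is True.
    package readycheck

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls every 500ms (the cadence visible in the timestamps
    // above) for up to 6 minutes, mirroring the "waiting up to 6m0s" lines.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

Returning (false, nil) on a failed GET keeps the poll alive until the timeout instead of aborting on the first transient error, which is why the trace shows an unbroken series of requests.
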
	I0314 19:19:29.016205    9056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:19:29.025082    9056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:19:29.048969    9056 command_runner.go:130] > 2278
	I0314 19:19:29.049143    9056 api_server.go:72] duration metric: took 12.0517174s to wait for apiserver process to appear ...
	I0314 19:19:29.049143    9056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:19:29.049196    9056 api_server.go:253] Checking apiserver healthz at https://172.17.86.124:8443/healthz ...
	I0314 19:19:29.055860    9056 api_server.go:279] https://172.17.86.124:8443/healthz returned 200:
	ok
	I0314 19:19:29.056707    9056 round_trippers.go:463] GET https://172.17.86.124:8443/version
	I0314 19:19:29.056707    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:29.056707    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:29.056707    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:29.058268    9056 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0314 19:19:29.058659    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:29.058659    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:29.058659    9056 round_trippers.go:580]     Content-Length: 264
	I0314 19:19:29.058659    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:29 GMT
	I0314 19:19:29.058659    9056 round_trippers.go:580]     Audit-Id: 06cde281-ccff-474d-84ff-7d8af37f794b
	I0314 19:19:29.058659    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:29.058659    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:29.058659    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:29.058659    9056 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0314 19:19:29.058659    9056 api_server.go:141] control plane version: v1.28.4
	I0314 19:19:29.058659    9056 api_server.go:131] duration metric: took 9.5156ms to wait for apiserver health ...
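
The health gate traced above is two plain HTTPS GETs: /healthz must return 200 with body "ok", then /version is decoded to read the control-plane gitVersion. A minimal sketch, assuming the endpoints answer without client certificates; minikube's real client authenticates with the cluster's TLS client certs and CA instead of skipping verification:

    // Sketch of the healthz + version check seen in the log above.
    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	c := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only: skip CA verification
    	}}

    	resp, err := c.Get("https://172.17.86.124:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	body, _ := io.ReadAll(resp.Body)
    	resp.Body.Close()
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

    	resp, err = c.Get("https://172.17.86.124:8443/version")
    	if err != nil {
    		panic(err)
    	}
    	var v struct {
    		GitVersion string `json:"gitVersion"`
    	}
    	_ = json.NewDecoder(resp.Body).Decode(&v)
    	resp.Body.Close()
    	fmt.Println("control plane version:", v.GitVersion) // v1.28.4 in the log above
    }
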
	I0314 19:19:29.058659    9056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:19:29.212722    9056 request.go:629] Waited for 154.051ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods
	I0314 19:19:29.212722    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods
	I0314 19:19:29.212880    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:29.212880    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:29.212880    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:29.217513    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:19:29.218316    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:29.218316    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:29.218316    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:29 GMT
	I0314 19:19:29.218316    9056 round_trippers.go:580]     Audit-Id: 2e4cdeaf-dd17-4807-9f27-b1b96011307b
	I0314 19:19:29.218316    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:29.218316    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:29.218316    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:29.220155    9056 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"446","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I0314 19:19:29.222628    9056 system_pods.go:59] 8 kube-system pods found
	I0314 19:19:29.222695    9056 system_pods.go:61] "coredns-5dd5756b68-d22jc" [2a563b3f-a175-4dc2-9f0b-67dbaefbfaac] Running
	I0314 19:19:29.222695    9056 system_pods.go:61] "etcd-multinode-442000" [8974ad44-5d36-48f0-bc6b-9115bab5fb5e] Running
	I0314 19:19:29.222695    9056 system_pods.go:61] "kindnet-7b9lf" [677b9084-0026-4b21-b041-445940624ed7] Running
	I0314 19:19:29.222695    9056 system_pods.go:61] "kube-apiserver-multinode-442000" [02a2d011-5f4c-451c-9698-a88e42e4b6c9] Running
	I0314 19:19:29.222695    9056 system_pods.go:61] "kube-controller-manager-multinode-442000" [b16fc874-ef74-44ca-a54f-bb678bf982df] Running
	I0314 19:19:29.222695    9056 system_pods.go:61] "kube-proxy-cg28g" [c7f798bf-6722-4731-af8d-ccd5703d116e] Running
	I0314 19:19:29.222762    9056 system_pods.go:61] "kube-scheduler-multinode-442000" [76b10598-fe0d-4a14-a8e4-a32221fbb68f] Running
	I0314 19:19:29.222762    9056 system_pods.go:61] "storage-provisioner" [65d76566-4401-4b28-8452-10ed98624901] Running
	I0314 19:19:29.222762    9056 system_pods.go:74] duration metric: took 164.0906ms to wait for pod list to return data ...
	I0314 19:19:29.222762    9056 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:19:29.415379    9056 request.go:629] Waited for 192.5307ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/default/serviceaccounts
	I0314 19:19:29.415682    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/default/serviceaccounts
	I0314 19:19:29.415837    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:29.415837    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:29.415837    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:29.421374    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:19:29.421374    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:29.421374    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:29.421374    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:29.421374    9056 round_trippers.go:580]     Content-Length: 261
	I0314 19:19:29.421374    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:29 GMT
	I0314 19:19:29.421374    9056 round_trippers.go:580]     Audit-Id: c52939c0-3784-4e4d-a3a8-2e2940593bd9
	I0314 19:19:29.421374    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:29.421374    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:29.421374    9056 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"31dfe296-58ba-4a37-a509-52c518a0c41a","resourceVersion":"365","creationTimestamp":"2024-03-14T19:19:16Z"}}]}
	I0314 19:19:29.421374    9056 default_sa.go:45] found service account: "default"
	I0314 19:19:29.421374    9056 default_sa.go:55] duration metric: took 198.5968ms for default service account to be created ...
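
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own flowcontrol rate limiter, not from the apiserver: with rest.Config QPS and Burst left at zero, client-go defaults to QPS 5 / Burst 10, i.e. one token every 200ms, which matches the 150-200ms waits logged here. A sketch of raising those limits; the kubeconfig path is illustrative:

    // Sketch: widen client-go's client-side rate limits via rest.Config.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\path\to\kubeconfig`) // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cfg.QPS = 50    // steady-state requests/second before the client throttles itself
    	cfg.Burst = 100 // short bursts allowed above QPS
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("client ready: %T\n", cs)
    }
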
	I0314 19:19:29.421374    9056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:19:29.616427    9056 request.go:629] Waited for 194.5078ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods
	I0314 19:19:29.616698    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods
	I0314 19:19:29.616864    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:29.616931    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:29.616931    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:29.621730    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:19:29.621730    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:29.621730    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:29.621730    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:29.621730    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:29.621730    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:29 GMT
	I0314 19:19:29.621730    9056 round_trippers.go:580]     Audit-Id: 84d6813b-d6ef-44d2-aafe-7f08b1275379
	I0314 19:19:29.621730    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:29.623963    9056 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"446","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I0314 19:19:29.626771    9056 system_pods.go:86] 8 kube-system pods found
	I0314 19:19:29.626850    9056 system_pods.go:89] "coredns-5dd5756b68-d22jc" [2a563b3f-a175-4dc2-9f0b-67dbaefbfaac] Running
	I0314 19:19:29.626850    9056 system_pods.go:89] "etcd-multinode-442000" [8974ad44-5d36-48f0-bc6b-9115bab5fb5e] Running
	I0314 19:19:29.626850    9056 system_pods.go:89] "kindnet-7b9lf" [677b9084-0026-4b21-b041-445940624ed7] Running
	I0314 19:19:29.626850    9056 system_pods.go:89] "kube-apiserver-multinode-442000" [02a2d011-5f4c-451c-9698-a88e42e4b6c9] Running
	I0314 19:19:29.626850    9056 system_pods.go:89] "kube-controller-manager-multinode-442000" [b16fc874-ef74-44ca-a54f-bb678bf982df] Running
	I0314 19:19:29.626850    9056 system_pods.go:89] "kube-proxy-cg28g" [c7f798bf-6722-4731-af8d-ccd5703d116e] Running
	I0314 19:19:29.626850    9056 system_pods.go:89] "kube-scheduler-multinode-442000" [76b10598-fe0d-4a14-a8e4-a32221fbb68f] Running
	I0314 19:19:29.626912    9056 system_pods.go:89] "storage-provisioner" [65d76566-4401-4b28-8452-10ed98624901] Running
	I0314 19:19:29.626912    9056 system_pods.go:126] duration metric: took 204.9915ms to wait for k8s-apps to be running ...
	I0314 19:19:29.626912    9056 system_svc.go:44] waiting for kubelet service to be running ...
	I0314 19:19:29.636242    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:19:29.660009    9056 system_svc.go:56] duration metric: took 33.0946ms for WaitForService to wait for kubelet
	I0314 19:19:29.660783    9056 kubeadm.go:576] duration metric: took 12.663312s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
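
The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` on the guest via ssh_runner; with --quiet the answer is carried entirely in the exit status. A sketch of the same probe over SSH using golang.org/x/crypto/ssh; the address, user, and key path are placeholders:

    // Sketch: run a remote command and read the result from its exit status.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(`C:\path\to\id_rsa`) // placeholder key path
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "172.17.86.124:22", &ssh.ClientConfig{
    		User:            "docker", // placeholder user
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()

    	// --quiet: no output; exit status 0 means the unit is active.
    	if err := sess.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
    		fmt.Println("kubelet not active:", err)
    		return
    	}
    	fmt.Println("kubelet active")
    }
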
	I0314 19:19:29.660783    9056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:19:29.817276    9056 request.go:629] Waited for 156.38ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/nodes
	I0314 19:19:29.817488    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes
	I0314 19:19:29.817488    9056 round_trippers.go:469] Request Headers:
	I0314 19:19:29.817488    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:19:29.817488    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:19:29.820848    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:19:29.820848    9056 round_trippers.go:577] Response Headers:
	I0314 19:19:29.820848    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:19:29.820848    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:19:30 GMT
	I0314 19:19:29.820848    9056 round_trippers.go:580]     Audit-Id: a81117a1-cf3b-457b-829d-7b47d812850b
	I0314 19:19:29.820848    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:19:29.820848    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:19:29.820848    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:19:29.821916    9056 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"429","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I0314 19:19:29.823281    9056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:19:29.823461    9056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:19:29.823461    9056 node_conditions.go:105] duration metric: took 162.6649ms to run NodePressure ...
	I0314 19:19:29.823537    9056 start.go:240] waiting for startup goroutines ...
	I0314 19:19:29.823537    9056 start.go:245] waiting for cluster config update ...
	I0314 19:19:29.823537    9056 start.go:254] writing updated cluster config ...
	I0314 19:19:29.827345    9056 out.go:177] 
	I0314 19:19:29.830579    9056 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:19:29.837307    9056 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:19:29.837307    9056 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:19:29.843283    9056 out.go:177] * Starting "multinode-442000-m02" worker node in "multinode-442000" cluster
	I0314 19:19:29.859795    9056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:19:29.860538    9056 cache.go:56] Caching tarball of preloaded images
	I0314 19:19:29.860538    9056 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 19:19:29.860538    9056 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 19:19:29.861067    9056 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:19:29.863037    9056 start.go:360] acquireMachinesLock for multinode-442000-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:19:29.863037    9056 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-442000-m02"
	I0314 19:19:29.863037    9056 start.go:93] Provisioning new machine with config: &{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:19:29.863037    9056 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0314 19:19:29.866149    9056 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 19:19:29.866149    9056 start.go:159] libmachine.API.Create for "multinode-442000" (driver="hyperv")
	I0314 19:19:29.866149    9056 client.go:168] LocalClient.Create starting
	I0314 19:19:29.866789    9056 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0314 19:19:29.866789    9056 main.go:141] libmachine: Decoding PEM data...
	I0314 19:19:29.866789    9056 main.go:141] libmachine: Parsing certificate...
	I0314 19:19:29.866789    9056 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0314 19:19:29.866789    9056 main.go:141] libmachine: Decoding PEM data...
	I0314 19:19:29.866789    9056 main.go:141] libmachine: Parsing certificate...
	I0314 19:19:29.866789    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0314 19:19:31.659906    9056 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0314 19:19:31.659906    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:31.660084    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0314 19:19:33.302323    9056 main.go:141] libmachine: [stdout =====>] : False
	
	I0314 19:19:33.302323    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:33.302323    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 19:19:34.727469    9056 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 19:19:34.727469    9056 main.go:141] libmachine: [stderr =====>] : 
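
Each [executing ==>] / [stdout =====>] pair above is one full powershell.exe round trip: the driver first probes for the Hyper-V module, then checks membership in the Hyper-V Administrators group (SID S-1-5-32-578, False here), then falls back to the built-in Administrator role (True). A hedged Go sketch of that shell-out pattern (runPowerShell is an illustrative name, not the driver's actual API):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runPowerShell shells out the way the log shows: no profile,
    // non-interactive, stdout and stderr captured separately.
    func runPowerShell(command string) (string, error) {
        cmd := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", command)
        var stdout, stderr strings.Builder
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        if err := cmd.Run(); err != nil {
            return "", fmt.Errorf("powershell: %v (stderr: %s)", err, stderr.String())
        }
        return strings.TrimSpace(stdout.String()), nil
    }

    func main() {
        out, err := runPowerShell(`@([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")`)
        if err != nil {
            panic(err)
        }
        fmt.Println("elevated:", out == "True") // "True" in the run above
    }
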
	I0314 19:19:34.727753    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 19:19:38.175287    9056 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 19:19:38.175287    9056 main.go:141] libmachine: [stderr =====>] : 
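
The switch probe serializes Get-VMSwitch to JSON so the Go side can choose a network: an External switch is preferred, otherwise the well-known "Default Switch" (Id c08cb7b8-..., SwitchType 1 = Internal) is used, as happens here. A sketch of the decode-and-select step, assuming the JSON always arrives as an array:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // 0=Private, 1=Internal, 2=External
    }

    // pickSwitch prefers an External switch, then falls back to the
    // well-known Default Switch, mirroring the PowerShell filter above.
    func pickSwitch(raw []byte) (vmSwitch, error) {
        var switches []vmSwitch
        if err := json.Unmarshal(raw, &switches); err != nil {
            return vmSwitch{}, err
        }
        for _, s := range switches {
            if s.SwitchType == 2 {
                return s, nil
            }
        }
        for _, s := range switches {
            if s.Id == "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444" {
                return s, nil
            }
        }
        return vmSwitch{}, fmt.Errorf("no usable Hyper-V switch found")
    }

    func main() {
        raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
        s, err := pickSwitch(raw)
        if err != nil {
            panic(err)
        }
        fmt.Printf("Using switch %q\n", s.Name)
    }
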
	I0314 19:19:38.177129    9056 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 19:19:38.498644    9056 main.go:141] libmachine: Creating SSH key...
	I0314 19:19:38.835451    9056 main.go:141] libmachine: Creating VM...
	I0314 19:19:38.835451    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0314 19:19:41.547593    9056 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0314 19:19:41.547593    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:41.547872    9056 main.go:141] libmachine: Using switch "Default Switch"
	I0314 19:19:41.547927    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0314 19:19:43.233638    9056 main.go:141] libmachine: [stdout =====>] : True
	
	I0314 19:19:43.233638    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:43.233868    9056 main.go:141] libmachine: Creating VHD
	I0314 19:19:43.234051    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0314 19:19:46.847049    9056 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D63B9E0B-C829-4A9A-BBFD-3DC3AB7DCAC0
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0314 19:19:46.847049    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:46.847049    9056 main.go:141] libmachine: Writing magic tar header
	I0314 19:19:46.847858    9056 main.go:141] libmachine: Writing SSH key tar header
	I0314 19:19:46.856092    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0314 19:19:49.893165    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:19:49.893165    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:49.893165    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\disk.vhd' -SizeBytes 20000MB
	I0314 19:19:52.297877    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:19:52.297877    9056 main.go:141] libmachine: [stderr =====>] : 
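
The disk bootstrap above follows the boot2docker-style trick: create a small 10MB *fixed* VHD, write a tar stream (the "magic" header plus the generated SSH key) directly into its raw payload so the guest can pick up the key on first boot, then convert the image to a dynamic VHD and resize it to the requested 20000MB. A sketch of writing such a tar payload at the head of a raw image (file names are illustrative, and the exact on-disk contract with the boot ISO is an assumption here):

    package main

    import (
        "archive/tar"
        "os"
    )

    // writeKeyTar writes a tar stream containing the SSH key at the start
    // of the raw image, ahead of any filesystem the guest will create.
    func writeKeyTar(imagePath string, key []byte) error {
        f, err := os.OpenFile(imagePath, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(key))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(key); err != nil {
            return err
        }
        return tw.Close()
    }

    func main() {
        key, err := os.ReadFile("id_rsa.pub")
        if err != nil {
            panic(err)
        }
        if err := writeKeyTar("fixed.vhd", key); err != nil {
            panic(err)
        }
    }
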
	I0314 19:19:52.297877    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-442000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0314 19:19:55.741836    9056 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-442000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0314 19:19:55.741836    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:55.742229    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-442000-m02 -DynamicMemoryEnabled $false
	I0314 19:19:57.851551    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:19:57.851551    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:57.851978    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-442000-m02 -Count 2
	I0314 19:19:59.887962    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:19:59.887962    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:19:59.888826    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-442000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\boot2docker.iso'
	I0314 19:20:02.298644    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:02.299184    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:02.299351    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-442000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\disk.vhd'
	I0314 19:20:04.791298    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:04.791298    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:04.791298    9056 main.go:141] libmachine: Starting VM...
	I0314 19:20:04.791298    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-442000-m02
	I0314 19:20:07.680362    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:07.680362    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:07.680791    9056 main.go:141] libmachine: Waiting for host to start...
	I0314 19:20:07.680831    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:09.771952    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:09.771952    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:09.772891    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:12.113387    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:12.113387    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:13.122470    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:15.121644    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:15.121644    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:15.122092    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:17.469600    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:17.469600    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:18.477356    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:20.487594    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:20.487823    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:20.487901    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:22.780302    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:22.780628    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:23.789191    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:25.820854    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:25.821256    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:25.821337    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:28.216333    9056 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:20:28.216333    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:29.222901    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:31.251378    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:31.251378    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:31.251606    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:33.669107    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:20:33.669107    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:33.669107    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:35.653960    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:35.653960    9056 main.go:141] libmachine: [stderr =====>] : 
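
"Waiting for host to start..." is a bounded poll: each iteration queries the VM state and then the first IP address of the first network adapter, sleeping about a second between attempts until DHCP assigns 172.17.80.135 (roughly 25s into the wait in this run). A self-contained sketch of that loop (the ps helper stands in for the driver's PowerShell runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps is a minimal stand-in for the driver's PowerShell runner.
    func ps(command string) string {
        out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).Output()
        return strings.TrimSpace(string(out))
    }

    // waitForIP polls VM state and the first adapter's first address,
    // the two queries repeated in the log, until DHCP answers.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            ip := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
            if state == "Running" && ip != "" {
                return ip, nil
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
        ip, err := waitForIP("multinode-442000-m02", 5*time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println("host up at", ip) // 172.17.80.135 in the run above
    }
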
	I0314 19:20:35.653960    9056 machine.go:94] provisionDockerMachine start ...
	I0314 19:20:35.653960    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:37.683088    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:37.683785    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:37.683864    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:40.056495    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:20:40.057274    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:40.063027    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:20:40.074901    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:20:40.074901    9056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:20:40.218416    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:20:40.218525    9056 buildroot.go:166] provisioning hostname "multinode-442000-m02"
	I0314 19:20:40.218525    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:42.180793    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:42.180793    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:42.180793    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:44.573143    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:20:44.573326    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:44.577302    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:20:44.577784    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:20:44.577862    9056 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-442000-m02 && echo "multinode-442000-m02" | sudo tee /etc/hostname
	I0314 19:20:44.744224    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-442000-m02
	
	I0314 19:20:44.744272    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:46.716300    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:46.716300    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:46.716300    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:49.079867    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:20:49.080620    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:49.086355    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:20:49.086962    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:20:49.086962    9056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-442000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-442000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-442000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:20:49.243342    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
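
Hostname provisioning above issues three commands over the native SSH client: plain "hostname" (still reporting the ISO default "minikube"), the hostname and /etc/hostname rewrite, and the 127.0.1.1 guard for /etc/hosts. A sketch of a one-session-per-command runner using golang.org/x/crypto/ssh, assuming key auth with the machine's generated id_rsa (the key path here is illustrative):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runSSH opens one session per command, as the provisioner does.
    func runSSH(addr, user, keyPath, command string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh test VM, no known_hosts
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(command)
        return string(out), err
    }

    func main() {
        out, err := runSSH("172.17.80.135:22", "docker", "id_rsa",
            `sudo hostname multinode-442000-m02 && echo "multinode-442000-m02" | sudo tee /etc/hostname`)
        if err != nil {
            panic(err)
        }
        fmt.Print(out)
    }
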
	I0314 19:20:49.243393    9056 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 19:20:49.243393    9056 buildroot.go:174] setting up certificates
	I0314 19:20:49.243446    9056 provision.go:84] configureAuth start
	I0314 19:20:49.243502    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:51.238283    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:51.238283    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:51.238797    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:53.609845    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:20:53.609902    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:53.609981    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:20:55.574166    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:20:55.574203    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:55.574259    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:20:57.946938    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:20:57.946938    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:20:57.946938    9056 provision.go:143] copyHostCerts
	I0314 19:20:57.947635    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 19:20:57.947635    9056 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 19:20:57.947635    9056 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 19:20:57.948171    9056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 19:20:57.949079    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 19:20:57.949079    9056 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 19:20:57.949079    9056 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 19:20:57.949079    9056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 19:20:57.950258    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 19:20:57.950402    9056 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 19:20:57.950402    9056 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 19:20:57.950402    9056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 19:20:57.951403    9056 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-442000-m02 san=[127.0.0.1 172.17.80.135 localhost minikube multinode-442000-m02]
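
The server cert step issues a Docker TLS certificate signed by the shared minikube CA, carrying exactly the SANs listed above (127.0.0.1, 172.17.80.135, localhost, minikube, multinode-442000-m02). A compact crypto/x509 sketch of that issuance; the self-generated in-memory CA, key size, and one-year validity are assumptions for illustration, not minikube's actual parameters:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative in-memory CA; the real code loads ca.pem/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(1, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-442000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs exactly as listed in the provision.go line above.
            DNSNames:    []string{"localhost", "minikube", "multinode-442000-m02"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.80.135")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
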
	I0314 19:20:58.197687    9056 provision.go:177] copyRemoteCerts
	I0314 19:20:58.207106    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:20:58.207189    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:00.161737    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:00.162451    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:00.162451    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:02.564717    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:02.564717    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:02.564717    9056 sshutil.go:53] new ssh client: &{IP:172.17.80.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:21:02.679645    9056 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4720629s)
	I0314 19:21:02.679735    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 19:21:02.680163    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:21:02.725298    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 19:21:02.725347    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0314 19:21:02.787571    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 19:21:02.787571    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:21:02.831387    9056 provision.go:87] duration metric: took 13.5869115s to configureAuth
	I0314 19:21:02.831387    9056 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:21:02.832599    9056 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:21:02.832599    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:04.797262    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:04.797262    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:04.797905    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:07.216836    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:07.216836    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:07.223011    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:21:07.223707    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:21:07.223707    9056 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 19:21:07.365033    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 19:21:07.365127    9056 buildroot.go:70] root file system type: tmpfs
	I0314 19:21:07.365337    9056 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 19:21:07.365337    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:09.323507    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:09.323507    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:09.323586    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:11.686958    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:11.686958    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:11.691277    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:21:11.691660    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:21:11.691842    9056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.86.124"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 19:21:11.853818    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.86.124
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 19:21:11.853973    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:13.865969    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:13.865969    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:13.866075    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:16.305917    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:16.305917    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:16.310061    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:21:16.310418    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:21:16.310501    9056 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 19:21:18.385118    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 19:21:18.385178    9056 machine.go:97] duration metric: took 42.7279793s to provisionDockerMachine
	I0314 19:21:18.385232    9056 client.go:171] duration metric: took 1m48.5108818s to LocalClient.Create
	I0314 19:21:18.385288    9056 start.go:167] duration metric: took 1m48.5108818s to libmachine.API.Create "multinode-442000"
	I0314 19:21:18.385288    9056 start.go:293] postStartSetup for "multinode-442000-m02" (driver="hyperv")
	I0314 19:21:18.385344    9056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:21:18.395022    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:21:18.395022    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:20.354008    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:20.354008    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:20.354008    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:22.764602    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:22.764602    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:22.765154    9056 sshutil.go:53] new ssh client: &{IP:172.17.80.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:21:22.883855    9056 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4884924s)
	I0314 19:21:22.892668    9056 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:21:22.899999    9056 command_runner.go:130] > NAME=Buildroot
	I0314 19:21:22.900231    9056 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 19:21:22.900231    9056 command_runner.go:130] > ID=buildroot
	I0314 19:21:22.900231    9056 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 19:21:22.900231    9056 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 19:21:22.900322    9056 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:21:22.900359    9056 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 19:21:22.900679    9056 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 19:21:22.901298    9056 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 19:21:22.901298    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 19:21:22.910000    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:21:22.927545    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 19:21:22.971497    9056 start.go:296] duration metric: took 4.5858613s for postStartSetup
	I0314 19:21:22.973361    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:24.959862    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:24.959862    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:24.960360    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:27.332594    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:27.333279    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:27.333565    9056 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:21:27.336383    9056 start.go:128] duration metric: took 1m57.4644652s to createHost
	I0314 19:21:27.336516    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:29.321236    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:29.321236    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:29.321236    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:31.748593    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:31.748593    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:31.752681    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:21:31.753205    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:21:31.753284    9056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:21:31.885214    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444092.146123639
	
	I0314 19:21:31.885214    9056 fix.go:216] guest clock: 1710444092.146123639
	I0314 19:21:31.885214    9056 fix.go:229] Guest: 2024-03-14 19:21:32.146123639 +0000 UTC Remote: 2024-03-14 19:21:27.3365167 +0000 UTC m=+322.176166501 (delta=4.809606939s)
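
The clock check reads "date +%s.%N" in the guest and compares it against the host-side timestamp recorded when createHost returned; the ~4.81s delta here exceeds tolerance, so the next SSH exchange pushes host time into the guest with "sudo date -s @<unix-seconds>". A sketch of the drift check (the 2-second threshold is an assumption for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    // needsClockFix reports whether guest/host drift exceeds tolerance and,
    // if so, the command that would correct the guest clock.
    func needsClockFix(guest, host time.Time, tolerance time.Duration) (bool, string) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        if delta <= tolerance {
            return false, ""
        }
        return true, fmt.Sprintf("sudo date -s @%d", host.Unix())
    }

    func main() {
        guest := time.Unix(1710444092, 146123639) // parsed from `date +%s.%N`
        host := time.Date(2024, 3, 14, 19, 21, 27, 336516700, time.UTC)
        if fix, cmd := needsClockFix(guest, host, 2*time.Second); fix {
            fmt.Println("clock drift detected, would run:", cmd) // delta is ~4.81s
        }
    }
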
	I0314 19:21:31.885214    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:33.898724    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:33.898808    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:33.898891    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:36.280671    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:36.280671    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:36.285064    9056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:21:36.285474    9056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.80.135 22 <nil> <nil>}
	I0314 19:21:36.285474    9056 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710444091
	I0314 19:21:36.432820    9056 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 19:21:31 UTC 2024
	
	I0314 19:21:36.432820    9056 fix.go:236] clock set: Thu Mar 14 19:21:31 UTC 2024
	 (err=<nil>)
	I0314 19:21:36.432820    9056 start.go:83] releasing machines lock for "multinode-442000-m02", held for 2m6.5602115s
	I0314 19:21:36.433167    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:38.407447    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:38.407447    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:38.407543    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:40.845857    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:40.846801    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:40.849950    9056 out.go:177] * Found network options:
	I0314 19:21:40.852984    9056 out.go:177]   - NO_PROXY=172.17.86.124
	W0314 19:21:40.855570    9056 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 19:21:40.857868    9056 out.go:177]   - NO_PROXY=172.17.86.124
	W0314 19:21:40.860047    9056 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 19:21:40.863273    9056 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 19:21:40.865129    9056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:21:40.865129    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:40.874075    9056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 19:21:40.874075    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:21:42.898676    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:42.898676    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:42.898676    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:42.898676    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:42.898676    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:42.898676    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:45.291184    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:45.291184    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:45.291184    9056 sshutil.go:53] new ssh client: &{IP:172.17.80.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:21:45.312508    9056 main.go:141] libmachine: [stdout =====>] : 172.17.80.135
	
	I0314 19:21:45.313441    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:45.313744    9056 sshutil.go:53] new ssh client: &{IP:172.17.80.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:21:45.395884    9056 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0314 19:21:45.396799    9056 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.522381s)
	W0314 19:21:45.396799    9056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:21:45.409079    9056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:21:45.470622    9056 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 19:21:45.470622    9056 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6051434s)
	I0314 19:21:45.470622    9056 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0314 19:21:45.470622    9056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:21:45.470622    9056 start.go:494] detecting cgroup driver to use...
	I0314 19:21:45.470622    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:21:45.511384    9056 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0314 19:21:45.525850    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 19:21:45.561648    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 19:21:45.584362    9056 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 19:21:45.593786    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 19:21:45.619790    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:21:45.650853    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 19:21:45.678487    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:21:45.706306    9056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:21:45.735839    9056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
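
The run of sed edits above adapts /etc/containerd/config.toml to the cgroupfs driver: pin sandbox_image to pause:3.9, set restrict_oom_score_adj = false, force SystemdCgroup = false, migrate the v1 runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The same rewrites expressed in Go, as a sketch operating on the file contents rather than via remote sed:

    package main

    import (
        "fmt"
        "regexp"
    )

    // toCgroupfs applies the same rewrites as the sed commands above.
    func toCgroupfs(config string) string {
        rules := []struct{ re, repl string }{
            {`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
            {`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
            {`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
            {`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
            {`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
            {`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
        }
        for _, r := range rules {
            config = regexp.MustCompile(r.re).ReplaceAllString(config, r.repl)
        }
        return config
    }

    func main() {
        in := "  SystemdCgroup = true\n  sandbox_image = \"registry.k8s.io/pause:3.6\"\n"
        fmt.Print(toCgroupfs(in))
    }
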
	I0314 19:21:45.762305    9056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:21:45.778979    9056 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 19:21:45.789246    9056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:21:45.815846    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:21:45.993165    9056 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 19:21:46.022215    9056 start.go:494] detecting cgroup driver to use...
	I0314 19:21:46.034901    9056 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 19:21:46.055701    9056 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0314 19:21:46.055701    9056 command_runner.go:130] > [Unit]
	I0314 19:21:46.055701    9056 command_runner.go:130] > Description=Docker Application Container Engine
	I0314 19:21:46.055701    9056 command_runner.go:130] > Documentation=https://docs.docker.com
	I0314 19:21:46.055701    9056 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0314 19:21:46.055701    9056 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0314 19:21:46.055701    9056 command_runner.go:130] > StartLimitBurst=3
	I0314 19:21:46.055701    9056 command_runner.go:130] > StartLimitIntervalSec=60
	I0314 19:21:46.055701    9056 command_runner.go:130] > [Service]
	I0314 19:21:46.055701    9056 command_runner.go:130] > Type=notify
	I0314 19:21:46.055701    9056 command_runner.go:130] > Restart=on-failure
	I0314 19:21:46.055701    9056 command_runner.go:130] > Environment=NO_PROXY=172.17.86.124
	I0314 19:21:46.055701    9056 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0314 19:21:46.055701    9056 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0314 19:21:46.055701    9056 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0314 19:21:46.055701    9056 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0314 19:21:46.055701    9056 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0314 19:21:46.055701    9056 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0314 19:21:46.055701    9056 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0314 19:21:46.055701    9056 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0314 19:21:46.055701    9056 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0314 19:21:46.055701    9056 command_runner.go:130] > ExecStart=
	I0314 19:21:46.055701    9056 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0314 19:21:46.055701    9056 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0314 19:21:46.055701    9056 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0314 19:21:46.055701    9056 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0314 19:21:46.055701    9056 command_runner.go:130] > LimitNOFILE=infinity
	I0314 19:21:46.055701    9056 command_runner.go:130] > LimitNPROC=infinity
	I0314 19:21:46.055701    9056 command_runner.go:130] > LimitCORE=infinity
	I0314 19:21:46.055701    9056 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0314 19:21:46.055701    9056 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0314 19:21:46.055701    9056 command_runner.go:130] > TasksMax=infinity
	I0314 19:21:46.055701    9056 command_runner.go:130] > TimeoutStartSec=0
	I0314 19:21:46.055701    9056 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0314 19:21:46.055701    9056 command_runner.go:130] > Delegate=yes
	I0314 19:21:46.055701    9056 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0314 19:21:46.055701    9056 command_runner.go:130] > KillMode=process
	I0314 19:21:46.055701    9056 command_runner.go:130] > [Install]
	I0314 19:21:46.055701    9056 command_runner.go:130] > WantedBy=multi-user.target
	I0314 19:21:46.065666    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:21:46.095632    9056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:21:46.133419    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:21:46.163387    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:21:46.195191    9056 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 19:21:46.254006    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:21:46.276679    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:21:46.307042    9056 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0314 19:21:46.320284    9056 ssh_runner.go:195] Run: which cri-dockerd
	I0314 19:21:46.326747    9056 command_runner.go:130] > /usr/bin/cri-dockerd
	I0314 19:21:46.337295    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 19:21:46.354000    9056 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 19:21:46.394928    9056 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 19:21:46.580815    9056 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 19:21:46.780072    9056 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 19:21:46.780198    9056 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 19:21:46.826956    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:21:47.019244    9056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 19:21:49.510074    9056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4906407s)
	I0314 19:21:49.519993    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 19:21:49.551729    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:21:49.582489    9056 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 19:21:49.760362    9056 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 19:21:49.951816    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:21:50.130926    9056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 19:21:50.169259    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:21:50.200452    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:21:50.380350    9056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 19:21:50.477785    9056 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 19:21:50.486557    9056 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 19:21:50.499715    9056 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0314 19:21:50.499755    9056 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 19:21:50.499755    9056 command_runner.go:130] > Device: 0,22	Inode: 887         Links: 1
	I0314 19:21:50.499755    9056 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0314 19:21:50.499755    9056 command_runner.go:130] > Access: 2024-03-14 19:21:50.665869576 +0000
	I0314 19:21:50.499755    9056 command_runner.go:130] > Modify: 2024-03-14 19:21:50.665869576 +0000
	I0314 19:21:50.499755    9056 command_runner.go:130] > Change: 2024-03-14 19:21:50.668869846 +0000
	I0314 19:21:50.499755    9056 command_runner.go:130] >  Birth: -
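
"Will wait 60s for socket path /var/run/cri-dockerd.sock" is another bounded poll, stat-ing the socket until it appears. A local-filesystem sketch of that wait (the 500ms poll interval is an assumption; the real check runs stat over SSH):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the path exists or the deadline passes,
    // like the stat loop for /var/run/cri-dockerd.sock above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("socket %s not ready after %v", path, timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("cri-dockerd socket is up")
    }
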
	I0314 19:21:50.499946    9056 start.go:562] Will wait 60s for crictl version
	I0314 19:21:50.510433    9056 ssh_runner.go:195] Run: which crictl
	I0314 19:21:50.516430    9056 command_runner.go:130] > /usr/bin/crictl
	I0314 19:21:50.525289    9056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:21:50.590812    9056 command_runner.go:130] > Version:  0.1.0
	I0314 19:21:50.590812    9056 command_runner.go:130] > RuntimeName:  docker
	I0314 19:21:50.590812    9056 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0314 19:21:50.590812    9056 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 19:21:50.590812    9056 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 19:21:50.597895    9056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:21:50.628972    9056 command_runner.go:130] > 25.0.4
	I0314 19:21:50.640419    9056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:21:50.673001    9056 command_runner.go:130] > 25.0.4
	I0314 19:21:50.678165    9056 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 19:21:50.681651    9056 out.go:177]   - env NO_PROXY=172.17.86.124
	I0314 19:21:50.684655    9056 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 19:21:50.689460    9056 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 19:21:50.689460    9056 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 19:21:50.689460    9056 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 19:21:50.689460    9056 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 19:21:50.691944    9056 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 19:21:50.691944    9056 ip.go:210] interface addr: 172.17.80.1/20
	I0314 19:21:50.702531    9056 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 19:21:50.709031    9056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
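	[editor's note] The one-liner above is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal entry, append the current gateway mapping, and copy the temp file back into place. The same logic unrolled for readability:

    # Idempotent pinning of host.minikube.internal (172.17.80.1 is the
    # "vEthernet (Default Switch)" gateway address found a few lines above).
    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
    printf '172.17.80.1\thost.minikube.internal\n' >> /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts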
	I0314 19:21:50.731240    9056 mustload.go:65] Loading cluster: multinode-442000
	I0314 19:21:50.731990    9056 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:21:50.732778    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:21:52.695829    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:52.695829    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:52.695829    9056 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:21:52.696455    9056 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000 for IP: 172.17.80.135
	I0314 19:21:52.696455    9056 certs.go:194] generating shared ca certs ...
	I0314 19:21:52.696530    9056 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:21:52.696676    9056 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 19:21:52.697203    9056 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 19:21:52.697397    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 19:21:52.697397    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 19:21:52.697397    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 19:21:52.697397    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 19:21:52.698111    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 19:21:52.698194    9056 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 19:21:52.698194    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 19:21:52.698194    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 19:21:52.698194    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 19:21:52.698811    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 19:21:52.698930    9056 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 19:21:52.698930    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 19:21:52.698930    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:21:52.699455    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 19:21:52.699612    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:21:52.757346    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 19:21:52.801667    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:21:52.864261    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 19:21:52.916136    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 19:21:52.958938    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:21:53.000895    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
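	[editor's note] At this point the shared CA material has been copied to the node twice over: once under /var/lib/minikube/certs for the apiserver, and once under /usr/share/ca-certificates for the system trust store. An illustrative spot-check of what landed, using the paths from this run:

    # Inspect the CA material that was just scp'd to the guest.
    sudo openssl x509 -in /var/lib/minikube/certs/ca.crt -noout -subject -enddate
    sudo openssl x509 -in /usr/share/ca-certificates/minikubeCA.pem \
      -noout -fingerprint -sha256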
	I0314 19:21:53.052980    9056 ssh_runner.go:195] Run: openssl version
	I0314 19:21:53.061295    9056 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 19:21:53.071511    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 19:21:53.126377    9056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 19:21:53.134305    9056 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:21:53.134426    9056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:21:53.144633    9056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 19:21:53.153091    9056 command_runner.go:130] > 51391683
	I0314 19:21:53.162050    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
	I0314 19:21:53.190803    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 19:21:53.220117    9056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 19:21:53.227468    9056 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:21:53.227468    9056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:21:53.236730    9056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 19:21:53.244889    9056 command_runner.go:130] > 3ec20f2e
	I0314 19:21:53.254484    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:21:53.281783    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:21:53.308798    9056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:21:53.316008    9056 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:21:53.316101    9056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:21:53.324560    9056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:21:53.333421    9056 command_runner.go:130] > b5213941
	I0314 19:21:53.344525    9056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
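	[editor's note] The three openssl x509 -hash / ln -fs pairs above install each CA under OpenSSL's hashed-directory scheme: clients resolve an issuer by opening /etc/ssl/certs/<subject-hash>.0 instead of scanning every file, which is why the symlinks are named 51391683.0, 3ec20f2e.0 and b5213941.0. The generic pattern for one certificate:

    # Register a CA in /etc/ssl/certs via OpenSSL's hashed-name lookup.
    CERT=/usr/share/ca-certificates/minikubeCA.pem   # any PEM cert works
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    openssl verify -CApath /etc/ssl/certs "$CERT"    # should print: OK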
	I0314 19:21:53.371355    9056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:21:53.378104    9056 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:21:53.378104    9056 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:21:53.378369    9056 kubeadm.go:928] updating node {m02 172.17.80.135 8443 v1.28.4 docker false true} ...
	I0314 19:21:53.378523    9056 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-442000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.80.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
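	[editor's note] The unit dump above shows how the worker gets its identity: ExecStart is emptied and redefined with --hostname-override=multinode-442000-m02 and --node-ip=172.17.80.135, delivered as the 10-kubeadm.conf drop-in scp'd shortly after. To inspect the effective unit on the node, standard systemd tooling is enough (a sketch, run on the guest):

    # Show the kubelet unit together with its drop-in overrides.
    systemctl cat kubelet
    systemctl show kubelet -p FragmentPath -p DropInPaths
    # Once running, the per-node flags should appear on the command line:
    pgrep -af kubelet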
	I0314 19:21:53.387263    9056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:21:53.403494    9056 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0314 19:21:53.403737    9056 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0314 19:21:53.412504    9056 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0314 19:21:53.429660    9056 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0314 19:21:53.429774    9056 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0314 19:21:53.429660    9056 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
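	[editor's note] The three "Not caching binary" lines mean kubeadm, kubelet and kubectl stream straight from dl.k8s.io, each verified against the published .sha256 file named in the checksum=file:... suffix. A manual equivalent for one of them (kubeadm shown; kubelet and kubectl follow the same pattern):

    # Fetch kubeadm v1.28.4 and verify it the way the checksum URL implies.
    VER=v1.28.4
    curl -fLO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubeadm"
    curl -fL "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubeadm.sha256" \
      -o kubeadm.sha256
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check
    sudo install -m 0755 kubeadm "/var/lib/minikube/binaries/${VER}/kubeadm"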
	I0314 19:21:53.429899    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 19:21:53.429952    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 19:21:53.440897    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:21:53.442009    9056 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 19:21:53.444613    9056 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 19:21:53.462453    9056 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 19:21:53.462510    9056 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0314 19:21:53.462593    9056 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0314 19:21:53.462632    9056 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0314 19:21:53.462658    9056 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0314 19:21:53.462658    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0314 19:21:53.462658    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0314 19:21:53.471915    9056 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 19:21:53.576412    9056 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0314 19:21:53.576412    9056 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0314 19:21:53.576730    9056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0314 19:21:54.503377    9056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0314 19:21:54.520475    9056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0314 19:21:54.549132    9056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:21:54.587668    9056 ssh_runner.go:195] Run: grep 172.17.86.124	control-plane.minikube.internal$ /etc/hosts
	I0314 19:21:54.593847    9056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.86.124	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:21:54.623462    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:21:54.814765    9056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:21:54.843634    9056 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:21:54.843871    9056 start.go:316] joinCluster: &{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:54.844400    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0314 19:21:54.844400    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:21:56.802297    9056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:21:56.802297    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:56.802505    9056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:21:59.177515    9056 main.go:141] libmachine: [stdout =====>] : 172.17.86.124
	
	I0314 19:21:59.178040    9056 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:21:59.178593    9056 sshutil.go:53] new ssh client: &{IP:172.17.86.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:21:59.364396    9056 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token pa31bj.d06vwfoo3c12dik2 --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb 
	I0314 19:21:59.364531    9056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5197105s)
	I0314 19:21:59.364702    9056 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:21:59.364795    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pa31bj.d06vwfoo3c12dik2 --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-442000-m02"
	I0314 19:21:59.587379    9056 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:22:02.378780    9056 command_runner.go:130] > [preflight] Running pre-flight checks
	I0314 19:22:02.378850    9056 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0314 19:22:02.378850    9056 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0314 19:22:02.378850    9056 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:22:02.378850    9056 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:22:02.378850    9056 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0314 19:22:02.378850    9056 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0314 19:22:02.378850    9056 command_runner.go:130] > This node has joined the cluster:
	I0314 19:22:02.378850    9056 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0314 19:22:02.378850    9056 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0314 19:22:02.378850    9056 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0314 19:22:02.378850    9056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pa31bj.d06vwfoo3c12dik2 --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-442000-m02": (3.0138259s)
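	[editor's note] The join above is a two-step handshake: kubeadm token create --print-join-command --ttl=0 on the control plane emits a complete join command (endpoint, non-expiring bootstrap token, CA cert hash), and minikube replays it on m02 with the CRI socket and node name pinned. Condensed, with the caveat that the token shown in this log is specific to this run:

    # On the control plane: mint a join command with a non-expiring token.
    JOIN=$(sudo kubeadm token create --print-join-command --ttl=0)
    # On the new worker: replay it, pinning cri-dockerd and the node name.
    sudo $JOIN --ignore-preflight-errors=all \
      --cri-socket unix:///var/run/cri-dockerd.sock \
      --node-name=multinode-442000-m02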
	I0314 19:22:02.378850    9056 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0314 19:22:02.582221    9056 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0314 19:22:02.810448    9056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-442000-m02 minikube.k8s.io/updated_at=2024_03_14T19_22_02_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=multinode-442000 minikube.k8s.io/primary=false
	I0314 19:22:02.926192    9056 command_runner.go:130] > node/multinode-442000-m02 labeled
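	[editor's note] The label call above stamps minikube's bookkeeping labels (version, commit, updated_at, primary=false) onto the new node; they can be read back with:

    kubectl --context multinode-442000 get node multinode-442000-m02 --show-labels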
	I0314 19:22:02.928378    9056 start.go:318] duration metric: took 8.0838931s to joinCluster
	I0314 19:22:02.928378    9056 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:22:02.929124    9056 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:22:02.933271    9056 out.go:177] * Verifying Kubernetes components...
	I0314 19:22:02.945308    9056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:22:03.162242    9056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:22:03.189553    9056 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:22:03.189708    9056 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.86.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:22:03.190392    9056 node_ready.go:35] waiting up to 6m0s for node "multinode-442000-m02" to be "Ready" ...
	I0314 19:22:03.190392    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:03.190392    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:03.190392    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:03.190392    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:03.213671    9056 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0314 19:22:03.213671    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:03.213671    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:03.213671    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:03.213671    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:03.213671    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:03.213671    9056 round_trippers.go:580]     Content-Length: 4043
	I0314 19:22:03.213671    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:03 GMT
	I0314 19:22:03.213671    9056 round_trippers.go:580]     Audit-Id: 76549ac2-c016-4094-82fa-31e656431630
	I0314 19:22:03.213671    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"600","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0314 19:22:03.699529    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:03.699529    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:03.699618    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:03.699618    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:03.706389    9056 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:22:03.706389    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:03.706389    9056 round_trippers.go:580]     Content-Length: 4043
	I0314 19:22:03.706389    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:03 GMT
	I0314 19:22:03.706389    9056 round_trippers.go:580]     Audit-Id: 1ef77955-8b11-4c22-9104-8f935db0dee8
	I0314 19:22:03.706389    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:03.706389    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:03.706389    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:03.706389    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:03.706389    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"600","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0314 19:22:04.205843    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:04.206007    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:04.206007    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:04.206007    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:04.209426    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:04.210053    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:04.210053    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:04.210053    9056 round_trippers.go:580]     Content-Length: 4043
	I0314 19:22:04.210053    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:04 GMT
	I0314 19:22:04.210053    9056 round_trippers.go:580]     Audit-Id: 1ca89c59-521d-4aee-803c-e11acbdd8349
	I0314 19:22:04.210053    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:04.210053    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:04.210161    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:04.210267    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"600","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0314 19:22:04.691464    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:04.691577    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:04.691577    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:04.691577    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:04.695673    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:04.695761    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:04.695761    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:04.695761    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:04.695761    9056 round_trippers.go:580]     Content-Length: 4043
	I0314 19:22:04.695843    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:04 GMT
	I0314 19:22:04.695843    9056 round_trippers.go:580]     Audit-Id: 3f50a233-11bf-4ac8-8b56-f38a3841487e
	I0314 19:22:04.695843    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:04.695843    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:04.696103    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"600","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0314 19:22:05.191223    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:05.191348    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:05.191348    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:05.191348    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:05.194400    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:05.194400    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:05.194400    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:05.194400    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:05.194400    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:05.194400    9056 round_trippers.go:580]     Content-Length: 4043
	I0314 19:22:05.194400    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:05 GMT
	I0314 19:22:05.195298    9056 round_trippers.go:580]     Audit-Id: ba6a682e-11a6-4ff6-ae98-26511c27ceb5
	I0314 19:22:05.195298    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:05.195431    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"600","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0314 19:22:05.195889    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:05.704574    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:05.704574    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:05.704574    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:05.704574    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:05.708154    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:05.708154    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:05.708154    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:05.708154    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:05.708154    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:05.708154    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:05.708154    9056 round_trippers.go:580]     Content-Length: 4043
	I0314 19:22:05.708154    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:05 GMT
	I0314 19:22:05.708154    9056 round_trippers.go:580]     Audit-Id: b5bdd9f7-684d-4d3f-902b-228ea890e4d1
	I0314 19:22:05.708154    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"600","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0314 19:22:06.194903    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:06.194903    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:06.194903    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:06.194903    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:06.478040    9056 round_trippers.go:574] Response Status: 200 OK in 283 milliseconds
	I0314 19:22:06.478425    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:06.478425    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:06.478425    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:06.478425    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:06 GMT
	I0314 19:22:06.478425    9056 round_trippers.go:580]     Audit-Id: f04f9474-d219-4014-af13-36a5dd1ea8c6
	I0314 19:22:06.478425    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:06.478425    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:06.478528    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:06.695274    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:06.695274    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:06.695274    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:06.695274    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:06.698842    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:06.699065    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:06.699065    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:06 GMT
	I0314 19:22:06.699065    9056 round_trippers.go:580]     Audit-Id: d710c207-de59-48df-8844-d5be09fc9753
	I0314 19:22:06.699065    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:06.699065    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:06.699065    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:06.699065    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:06.699234    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:07.200099    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:07.201042    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:07.201042    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:07.201042    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:07.206187    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:22:07.206187    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:07.206187    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:07.206187    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:07.206187    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:07.206187    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:07.206187    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:07 GMT
	I0314 19:22:07.206187    9056 round_trippers.go:580]     Audit-Id: c85cd24e-e783-44f4-9764-8e46d7990538
	I0314 19:22:07.207081    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:07.207330    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:07.704755    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:07.704755    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:07.704755    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:07.704755    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:07.708332    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:07.708332    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:07.708332    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:07.708332    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:07.709081    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:07 GMT
	I0314 19:22:07.709081    9056 round_trippers.go:580]     Audit-Id: 2a0c3e36-fc9d-4fd1-a9ff-a980ab8a6c91
	I0314 19:22:07.709081    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:07.709081    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:07.709248    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:08.191577    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:08.191649    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:08.191649    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:08.191649    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:08.214579    9056 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0314 19:22:08.214611    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:08.214611    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:08 GMT
	I0314 19:22:08.214611    9056 round_trippers.go:580]     Audit-Id: 324b6e3d-1f27-4fc8-9ba8-df4ac9e7c92f
	I0314 19:22:08.214611    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:08.214611    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:08.214611    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:08.214689    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:08.214750    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:08.698189    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:08.698189    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:08.698189    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:08.698189    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:08.702533    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:08.702533    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:08.703530    9056 round_trippers.go:580]     Audit-Id: bca58514-d761-4ccb-a17a-51c75a826e06
	I0314 19:22:08.703530    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:08.703530    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:08.703530    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:08.703530    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:08.703530    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:08 GMT
	I0314 19:22:08.703530    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:09.192702    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:09.192806    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:09.192806    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:09.192806    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:09.200113    9056 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:22:09.200113    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:09.200113    9056 round_trippers.go:580]     Audit-Id: 3d235f04-3eb3-4624-8726-764fbf5d0715
	I0314 19:22:09.200113    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:09.200113    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:09.200113    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:09.200113    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:09.200113    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:09 GMT
	I0314 19:22:09.200113    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:09.699265    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:09.699472    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:09.699472    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:09.699472    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:09.703786    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:09.703786    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:09.703786    9056 round_trippers.go:580]     Audit-Id: f3626a08-c6da-4fb2-991a-2c149c864a1d
	I0314 19:22:09.703786    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:09.703786    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:09.703786    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:09.703786    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:09.703786    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:09 GMT
	I0314 19:22:09.704551    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:09.704953    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:10.192188    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:10.192292    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:10.192292    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:10.192292    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:10.198533    9056 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:22:10.198533    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:10.198533    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:10.198533    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:10.198533    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:10 GMT
	I0314 19:22:10.198533    9056 round_trippers.go:580]     Audit-Id: e1729667-d3a5-409c-bd49-835930ffe97d
	I0314 19:22:10.198533    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:10.198533    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:10.198533    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:10.695332    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:10.695406    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:10.695406    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:10.695406    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:10.698685    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:10.698685    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:10.698685    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:10.698685    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:10.698685    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:10.698685    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:10 GMT
	I0314 19:22:10.698685    9056 round_trippers.go:580]     Audit-Id: f2c84a08-e923-433e-9b0c-8c00d6d25e4d
	I0314 19:22:10.698685    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:10.699130    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:11.199816    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:11.199816    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:11.199816    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:11.199894    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:11.204119    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:11.204119    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:11.204119    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:11 GMT
	I0314 19:22:11.204119    9056 round_trippers.go:580]     Audit-Id: f4a22172-bfc6-4941-9b42-0737a9dfab48
	I0314 19:22:11.204119    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:11.204119    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:11.204119    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:11.204119    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:11.204737    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:11.691419    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:11.691474    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:11.691474    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:11.691540    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:11.695388    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:11.695388    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:11.695479    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:11.695479    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:11.695479    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:11.695479    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:11.695479    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:11 GMT
	I0314 19:22:11.695479    9056 round_trippers.go:580]     Audit-Id: 5eb1f044-c342-4eed-84a5-40ebe6c1dde8
	I0314 19:22:11.695544    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:12.201094    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:12.201198    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:12.201198    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:12.201250    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:12.283598    9056 round_trippers.go:574] Response Status: 200 OK in 82 milliseconds
	I0314 19:22:12.283598    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:12.283598    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:12.283598    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:12.283598    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:12.283598    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:12 GMT
	I0314 19:22:12.283598    9056 round_trippers.go:580]     Audit-Id: af0c4f43-20cf-454d-88b9-3eb69b563512
	I0314 19:22:12.283598    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:12.284506    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"605","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0314 19:22:12.284862    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:12.705048    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:12.705048    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:12.705048    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:12.705048    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:13.006310    9056 round_trippers.go:574] Response Status: 200 OK in 301 milliseconds
	I0314 19:22:13.006401    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:13.006401    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:13.006401    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:13.006401    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:13.006401    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:13 GMT
	I0314 19:22:13.006464    9056 round_trippers.go:580]     Audit-Id: f01dde5e-0d5e-48d3-88c6-c56a2afb4236
	I0314 19:22:13.006464    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:13.006654    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:13.204365    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:13.204365    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:13.204365    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:13.204365    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:13.207970    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:13.207970    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:13.207970    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:13 GMT
	I0314 19:22:13.207970    9056 round_trippers.go:580]     Audit-Id: 9b9fd8c4-a310-4247-884e-3a050152c897
	I0314 19:22:13.207970    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:13.207970    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:13.208156    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:13.208156    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:13.208156    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:13.695731    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:13.696036    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:13.696036    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:13.696036    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:13.700122    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:13.700122    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:13.700122    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:13.700122    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:13 GMT
	I0314 19:22:13.700122    9056 round_trippers.go:580]     Audit-Id: 63041cf6-04c8-4687-9adb-0f73e0aee851
	I0314 19:22:13.700122    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:13.700122    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:13.700122    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:13.700122    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:14.200985    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:14.200985    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:14.201059    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:14.201059    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:14.204361    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:14.204763    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:14.204819    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:14.204819    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:14 GMT
	I0314 19:22:14.204819    9056 round_trippers.go:580]     Audit-Id: 4252016d-ff63-4b71-8b86-53710940043e
	I0314 19:22:14.204819    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:14.204819    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:14.204819    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:14.204878    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:14.703803    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:14.703897    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:14.703897    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:14.703897    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:14.708421    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:14.708873    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:14.708873    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:14.708873    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:14.708873    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:14.708873    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:14 GMT
	I0314 19:22:14.708873    9056 round_trippers.go:580]     Audit-Id: 31966180-888f-4b64-89dd-a93f05ef71ad
	I0314 19:22:14.708873    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:14.708873    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:14.708873    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:15.206320    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:15.206373    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:15.206373    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:15.206373    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:15.211982    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:22:15.211982    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:15.211982    9056 round_trippers.go:580]     Audit-Id: 67729e53-8cea-4e7d-8a8e-8f950ce19db7
	I0314 19:22:15.211982    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:15.211982    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:15.211982    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:15.211982    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:15.211982    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:15 GMT
	I0314 19:22:15.213224    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:15.697654    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:15.697654    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:15.697654    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:15.697654    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:15.701661    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:15.701661    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:15.701661    9056 round_trippers.go:580]     Audit-Id: aec38901-0be5-4bd9-977c-ed08a8358977
	I0314 19:22:15.701661    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:15.701661    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:15.701661    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:15.701661    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:15.701661    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:15 GMT
	I0314 19:22:15.701990    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:16.206045    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:16.206045    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:16.206045    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:16.206045    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:16.209645    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:16.210387    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:16.210496    9056 round_trippers.go:580]     Audit-Id: a29dae83-c050-47e0-b7f9-e1c2f035c26c
	I0314 19:22:16.210599    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:16.210645    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:16.210645    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:16.210645    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:16.210744    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:16 GMT
	I0314 19:22:16.210906    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:16.696072    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:16.696291    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:16.696291    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:16.696291    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:16.699756    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:16.699756    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:16.699756    9056 round_trippers.go:580]     Audit-Id: c0e6bfe0-98b7-4bb6-b340-5164d263f588
	I0314 19:22:16.699756    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:16.699756    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:16.699756    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:16.699756    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:16.699756    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:16 GMT
	I0314 19:22:16.700618    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:17.205365    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:17.205365    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:17.205365    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:17.205365    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:17.208926    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:17.208926    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:17.208926    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:17 GMT
	I0314 19:22:17.208926    9056 round_trippers.go:580]     Audit-Id: be4e5713-635d-403b-abda-f8ba44a75dba
	I0314 19:22:17.208926    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:17.208926    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:17.208926    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:17.208926    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:17.208926    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:17.209747    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:17.695566    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:17.695566    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:17.695566    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:17.695566    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:17.699849    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:17.699849    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:17.699927    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:17.699927    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:17.699927    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:17.699927    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:17 GMT
	I0314 19:22:17.699927    9056 round_trippers.go:580]     Audit-Id: 4dd4472f-af53-42ad-9df9-4e536c503cf9
	I0314 19:22:17.699927    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:17.700114    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:18.203651    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:18.203651    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:18.203651    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:18.203651    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:18.208982    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:22:18.208982    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:18.208982    9056 round_trippers.go:580]     Audit-Id: df3eb8ec-76a3-44eb-9215-f84b82c0c9b0
	I0314 19:22:18.209096    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:18.209096    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:18.209096    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:18.209096    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:18.209096    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:18 GMT
	I0314 19:22:18.210475    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:18.707530    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:18.707530    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:18.707530    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:18.707530    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:19.086661    9056 round_trippers.go:574] Response Status: 200 OK in 379 milliseconds
	I0314 19:22:19.086787    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:19.086787    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:19.086787    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:19.086787    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:19.086787    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:19 GMT
	I0314 19:22:19.086787    9056 round_trippers.go:580]     Audit-Id: e47406ed-6228-4469-9514-553e877412d7
	I0314 19:22:19.086787    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:19.087044    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:19.205116    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:19.205116    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:19.205116    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:19.205116    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:19.208682    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:19.208682    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:19.208682    9056 round_trippers.go:580]     Audit-Id: dbaa9b4b-2b58-4db4-9975-5579ad614abb
	I0314 19:22:19.208682    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:19.208682    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:19.208682    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:19.208682    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:19.208682    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:19 GMT
	I0314 19:22:19.209045    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:19.704742    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:19.705142    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:19.705142    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:19.705142    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:19.707721    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:19.707721    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:19.707721    9056 round_trippers.go:580]     Audit-Id: 59a0cc58-9962-4d56-823f-8821a4a7942b
	I0314 19:22:19.707721    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:19.708735    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:19.708735    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:19.708735    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:19.708735    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:19 GMT
	I0314 19:22:19.708735    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:19.709266    9056 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:22:20.192200    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:20.192200    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.192285    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.192285    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.196918    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:20.196966    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.196966    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.196966    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.196966    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.197072    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.197072    9056 round_trippers.go:580]     Audit-Id: 2aefd559-ed17-4a09-bdfd-bad5dda25a6f
	I0314 19:22:20.197072    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.197330    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"613","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0314 19:22:20.694089    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:20.694168    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.694168    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.694168    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.697029    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.697029    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.697029    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.697029    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.697029    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.697029    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.698052    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.698052    9056 round_trippers.go:580]     Audit-Id: f35fe208-b0f9-488d-945f-63f9cce340c1
	I0314 19:22:20.698108    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"634","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3263 chars]
	I0314 19:22:20.698108    9056 node_ready.go:49] node "multinode-442000-m02" has status "Ready":"True"
	I0314 19:22:20.698108    9056 node_ready.go:38] duration metric: took 17.5063863s for node "multinode-442000-m02" to be "Ready" ...
	I0314 19:22:20.698108    9056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:22:20.698108    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods
	I0314 19:22:20.698640    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.698640    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.698640    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.703825    9056 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:22:20.703825    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.703825    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.703825    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.703825    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.703825    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.703825    9056 round_trippers.go:580]     Audit-Id: c9cd5ed8-a0a4-466d-8028-0cb23df4e487
	I0314 19:22:20.704303    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.705603    9056 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"634"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"446","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67474 chars]
	I0314 19:22:20.708556    9056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.708608    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:22:20.708769    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.708769    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.708769    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.711822    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:20.711822    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.711822    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.711822    9056 round_trippers.go:580]     Audit-Id: f26291dd-274e-452d-a9c6-36e64c29afd0
	I0314 19:22:20.711822    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.711822    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.711822    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.711822    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.711822    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"446","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0314 19:22:20.711822    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:20.711822    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.711822    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.711822    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.714644    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.714644    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.714644    9056 round_trippers.go:580]     Audit-Id: 18d4c718-19da-416d-a314-5434ece1c248
	I0314 19:22:20.714644    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.714644    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.715528    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.715528    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.715528    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.715746    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0314 19:22:20.716092    9056 pod_ready.go:92] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:20.716185    9056 pod_ready.go:81] duration metric: took 7.4833ms for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.716185    9056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.716281    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-442000
	I0314 19:22:20.716281    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.716281    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.716281    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.718654    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.718654    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.718654    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.718654    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.718654    9056 round_trippers.go:580]     Audit-Id: fa148d72-9d80-42ca-8b36-0b02a516e91f
	I0314 19:22:20.718654    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.718654    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.718654    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.718654    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-442000","namespace":"kube-system","uid":"8974ad44-5d36-48f0-bc6b-9115bab5fb5e","resourceVersion":"410","creationTimestamp":"2024-03-14T19:19:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.86.124:2379","kubernetes.io/config.hash":"92e70beb375f9f247f5f8395dc065033","kubernetes.io/config.mirror":"92e70beb375f9f247f5f8395dc065033","kubernetes.io/config.seen":"2024-03-14T19:18:55.420198507Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0314 19:22:20.719653    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:20.719653    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.719653    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.719653    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.722529    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.722529    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.722529    9056 round_trippers.go:580]     Audit-Id: 916a3673-f6ae-4ec5-b535-b4b7aa07ce24
	I0314 19:22:20.722529    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.722529    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.722529    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.722529    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.722529    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.723181    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0314 19:22:20.723539    9056 pod_ready.go:92] pod "etcd-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:20.723539    9056 pod_ready.go:81] duration metric: took 7.3534ms for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.723539    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.723539    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-442000
	I0314 19:22:20.723539    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.723539    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.723539    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.726214    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.727156    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.727156    9056 round_trippers.go:580]     Audit-Id: e3e15290-b739-481c-a676-a49d92fc7192
	I0314 19:22:20.727156    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.727156    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.727156    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.727156    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.727156    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.727437    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-442000","namespace":"kube-system","uid":"02a2d011-5f4c-451c-9698-a88e42e4b6c9","resourceVersion":"414","creationTimestamp":"2024-03-14T19:19:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.86.124:8443","kubernetes.io/config.hash":"81fdcd9740169a0b72b7c7316eeac39f","kubernetes.io/config.mirror":"81fdcd9740169a0b72b7c7316eeac39f","kubernetes.io/config.seen":"2024-03-14T19:18:55.420203908Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0314 19:22:20.728006    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:20.728006    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.728006    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.728006    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.730570    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.730570    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.730570    9056 round_trippers.go:580]     Audit-Id: d0c9ba65-6c43-4b80-afc1-76dbcd1c7eeb
	I0314 19:22:20.730570    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.730570    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.730570    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.730570    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.730762    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:20 GMT
	I0314 19:22:20.730884    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0314 19:22:20.731236    9056 pod_ready.go:92] pod "kube-apiserver-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:20.731236    9056 pod_ready.go:81] duration metric: took 7.6966ms for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.731236    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.731236    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-442000
	I0314 19:22:20.731236    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.731236    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.731236    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.733785    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.733785    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.733785    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.733785    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.733785    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.733785    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.733785    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:20.734704    9056 round_trippers.go:580]     Audit-Id: 126dc5b6-78ff-4621-9c8f-62cba4e55a0c
	I0314 19:22:20.734863    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-442000","namespace":"kube-system","uid":"b16fc874-ef74-44ca-a54f-bb678bf982df","resourceVersion":"413","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.mirror":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.seen":"2024-03-14T19:18:55.420205308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0314 19:22:20.735446    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:20.735446    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.735446    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.735529    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.737656    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.737656    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.737656    9056 round_trippers.go:580]     Audit-Id: 2024caab-e501-49a1-bbc1-ed3bc967ebd6
	I0314 19:22:20.737656    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.737656    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.737924    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.737924    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.737924    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:20.738162    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0314 19:22:20.738509    9056 pod_ready.go:92] pod "kube-controller-manager-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:20.738563    9056 pod_ready.go:81] duration metric: took 7.3258ms for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.738563    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:20.899228    9056 request.go:629] Waited for 160.5036ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:22:20.899439    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:22:20.899439    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:20.899439    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:20.899439    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:20.901746    9056 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:22:20.902757    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:20.902757    9056 round_trippers.go:580]     Audit-Id: aebd7976-ca81-4e2f-930c-44e62d4143ec
	I0314 19:22:20.902757    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:20.902757    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:20.902757    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:20.902757    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:20.902757    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:20.902757    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-72dzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"80b840b0-3803-4102-a966-ea73aed74f49","resourceVersion":"621","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0314 19:22:21.103179    9056 request.go:629] Waited for 199.2119ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:21.103276    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:22:21.103401    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:21.103437    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:21.103468    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:21.106883    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:21.106883    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:21.106883    9056 round_trippers.go:580]     Audit-Id: 93e3aee3-376f-40c8-94a1-0ebc40f09d35
	I0314 19:22:21.106883    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:21.106883    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:21.106883    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:21.106883    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:21.107338    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:21.107478    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"635","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3143 chars]
	I0314 19:22:21.107863    9056 pod_ready.go:92] pod "kube-proxy-72dzs" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:21.107933    9056 pod_ready.go:81] duration metric: took 369.3428ms for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:21.107933    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:21.306275    9056 request.go:629] Waited for 198.1726ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:22:21.306275    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:22:21.306275    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:21.306275    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:21.306275    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:21.310796    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:21.311205    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:21.311205    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:21.311205    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:21.311205    9056 round_trippers.go:580]     Audit-Id: 4d2d17d8-9b9e-4bf8-ab4f-a7b42a77d699
	I0314 19:22:21.311205    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:21.311205    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:21.311277    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:21.311311    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cg28g","generateName":"kube-proxy-","namespace":"kube-system","uid":"c7f798bf-6722-4731-af8d-ccd5703d116e","resourceVersion":"405","creationTimestamp":"2024-03-14T19:19:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0314 19:22:21.508993    9056 request.go:629] Waited for 196.8279ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:21.508993    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:21.508993    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:21.508993    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:21.508993    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:21.513559    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:21.513559    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:21.513559    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:21.513559    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:21.513559    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:21.513559    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:21.513559    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:21.513680    9056 round_trippers.go:580]     Audit-Id: 96c6cc8c-df8a-41cf-9928-0f49aa8beb2b
	I0314 19:22:21.513870    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0314 19:22:21.514236    9056 pod_ready.go:92] pod "kube-proxy-cg28g" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:21.514236    9056 pod_ready.go:81] duration metric: took 406.2713ms for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:21.514236    9056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:21.694699    9056 request.go:629] Waited for 180.45ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:22:21.695033    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:22:21.695132    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:21.695132    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:21.695132    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:21.698362    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:21.698362    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:21.698362    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:21.698362    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:21.698752    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:21 GMT
	I0314 19:22:21.698752    9056 round_trippers.go:580]     Audit-Id: b34b4ac5-efc4-41e0-9884-1b5a3e0ead3e
	I0314 19:22:21.698792    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:21.698792    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:21.698914    9056 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-442000","namespace":"kube-system","uid":"76b10598-fe0d-4a14-a8e4-a32221fbb68f","resourceVersion":"412","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.mirror":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.seen":"2024-03-14T19:18:55.420206709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0314 19:22:21.897293    9056 request.go:629] Waited for 197.8239ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:21.897293    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes/multinode-442000
	I0314 19:22:21.897293    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:21.897293    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:21.897293    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:21.900882    9056 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:22:21.900882    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:21.901742    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:21.901742    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:22 GMT
	I0314 19:22:21.901742    9056 round_trippers.go:580]     Audit-Id: 78cfc5a4-545f-40f2-af33-db5aa70afd65
	I0314 19:22:21.901742    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:21.901742    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:21.901742    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:21.902072    9056 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0314 19:22:21.902904    9056 pod_ready.go:92] pod "kube-scheduler-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:22:21.902904    9056 pod_ready.go:81] duration metric: took 388.6387ms for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:22:21.902904    9056 pod_ready.go:38] duration metric: took 1.2047047s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:22:21.903009    9056 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:22:21.915673    9056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:22:21.939086    9056 system_svc.go:56] duration metric: took 36.1794ms WaitForService to wait for kubelet
	I0314 19:22:21.939086    9056 kubeadm.go:576] duration metric: took 19.0092651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:22:21.939086    9056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:22:22.106032    9056 request.go:629] Waited for 166.9331ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.86.124:8443/api/v1/nodes
	I0314 19:22:22.106032    9056 round_trippers.go:463] GET https://172.17.86.124:8443/api/v1/nodes
	I0314 19:22:22.106032    9056 round_trippers.go:469] Request Headers:
	I0314 19:22:22.106363    9056 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:22:22.106363    9056 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:22:22.111386    9056 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:22:22.111410    9056 round_trippers.go:577] Response Headers:
	I0314 19:22:22.111410    9056 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:22:22.111410    9056 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:22:22 GMT
	I0314 19:22:22.111410    9056 round_trippers.go:580]     Audit-Id: ac7c66ed-665d-4c77-8b4d-efbe1ec95106
	I0314 19:22:22.111410    9056 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:22:22.111410    9056 round_trippers.go:580]     Content-Type: application/json
	I0314 19:22:22.111410    9056 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:22:22.112442    9056 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"636"},"items":[{"metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"457","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9146 chars]
	I0314 19:22:22.113250    9056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:22:22.113250    9056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:22:22.113332    9056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:22:22.113332    9056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:22:22.113332    9056 node_conditions.go:105] duration metric: took 174.2327ms to run NodePressure ...
	I0314 19:22:22.113332    9056 start.go:240] waiting for startup goroutines ...
	I0314 19:22:22.113416    9056 start.go:254] writing updated cluster config ...
	I0314 19:22:22.123664    9056 ssh_runner.go:195] Run: rm -f paused
	I0314 19:22:22.251525    9056 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:22:22.257451    9056 out.go:177] * Done! kubectl is now configured to use "multinode-442000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.457252020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.457435646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.457523558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.457633273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.584396423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.584530241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.584550244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:19:27 multinode-442000 dockerd[1335]: time="2024-03-14T19:19:27.584707966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:22:46 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:46.089104493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 19:22:46 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:46.089180603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 19:22:46 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:46.089197505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:22:46 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:46.089352524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:22:46 multinode-442000 cri-dockerd[1219]: time="2024-03-14T19:22:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 14 19:22:47 multinode-442000 cri-dockerd[1219]: time="2024-03-14T19:22:47Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 14 19:22:47 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:47.593294878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 19:22:47 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:47.593441790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 19:22:47 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:47.593456291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:22:47 multinode-442000 dockerd[1335]: time="2024-03-14T19:22:47.594086740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 19:23:33 multinode-442000 dockerd[1328]: 2024/03/14 19:23:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:23:34 multinode-442000 dockerd[1328]: 2024/03/14 19:23:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:23:34 multinode-442000 dockerd[1328]: 2024/03/14 19:23:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:23:34 multinode-442000 dockerd[1328]: 2024/03/14 19:23:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:23:34 multinode-442000 dockerd[1328]: 2024/03/14 19:23:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:23:34 multinode-442000 dockerd[1328]: 2024/03/14 19:23:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:23:34 multinode-442000 dockerd[1328]: 2024/03/14 19:23:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cd43cdaa31c9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   fa0f2372c88ee       busybox-5b5d89c9d6-7446n
	8899bc0038935       ead0a4a53df89                                                                                         14 minutes ago      Running             coredns                   0                   a3dba3fc54c01       coredns-5dd5756b68-d22jc
	07c2872c48eda       6e38f40d628db                                                                                         14 minutes ago      Running             storage-provisioner       0                   b179d157b6b2f       storage-provisioner
	1a321c0e89971       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              15 minutes ago      Running             kindnet-cni               0                   b046b896affe9       kindnet-7b9lf
	2a62baf3f1b46       83f6cc407eed8                                                                                         15 minutes ago      Running             kube-proxy                0                   9b3244b47278e       kube-proxy-cg28g
	cd640f130e429       7fe0e6f37db33                                                                                         15 minutes ago      Running             kube-apiserver            0                   ab390fc53b998       kube-apiserver-multinode-442000
	dbb603289bf16       e3db313c6dbc0                                                                                         15 minutes ago      Running             kube-scheduler            0                   54e39762d7a64       kube-scheduler-multinode-442000
	16b80f73683dc       d058aa5ab969c                                                                                         15 minutes ago      Running             kube-controller-manager   0                   102c907609a3a       kube-controller-manager-multinode-442000
	9585e3eb2ead2       73deb9a3f7025                                                                                         15 minutes ago      Running             etcd                      0                   af5b88117f99a       etcd-multinode-442000
	
	
	==> coredns [8899bc003893] <==
	[INFO] 10.244.0.3:45005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148512s
	[INFO] 10.244.1.2:51938 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100608s
	[INFO] 10.244.1.2:46248 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00024762s
	[INFO] 10.244.1.2:46501 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100408s
	[INFO] 10.244.1.2:52414 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056704s
	[INFO] 10.244.1.2:44908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000121409s
	[INFO] 10.244.1.2:49578 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011941s
	[INFO] 10.244.1.2:51057 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060205s
	[INFO] 10.244.1.2:56240 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055805s
	[INFO] 10.244.0.3:32901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172914s
	[INFO] 10.244.0.3:41115 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149912s
	[INFO] 10.244.0.3:40494 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013161s
	[INFO] 10.244.0.3:40575 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077106s
	[INFO] 10.244.1.2:55307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194115s
	[INFO] 10.244.1.2:46435 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00025832s
	[INFO] 10.244.1.2:52095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156813s
	[INFO] 10.244.1.2:57849 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012701s
	[INFO] 10.244.0.3:47270 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244119s
	[INFO] 10.244.0.3:59009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000411532s
	[INFO] 10.244.0.3:40925 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108108s
	[INFO] 10.244.0.3:56417 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000067706s
	[INFO] 10.244.1.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108409s
	[INFO] 10.244.1.2:38949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118209s
	[INFO] 10.244.1.2:56933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156413s
	[INFO] 10.244.1.2:35971 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000072406s
	
	
	==> describe nodes <==
	Name:               multinode-442000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-442000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-442000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T19_19_05_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:19:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-442000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:34:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:33:23 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:33:23 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:33:23 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:33:23 +0000   Thu, 14 Mar 2024 19:19:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.86.124
	  Hostname:    multinode-442000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a631478f2504cf7a53faa0b685d7672
	  System UUID:                8469b663-ea90-da4f-856d-11034a8f65d8
	  Boot ID:                    a1b2bf56-435d-41c4-ac00-a53a4e6ba2b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-7446n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5dd5756b68-d22jc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-multinode-442000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-7b9lf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-multinode-442000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-multinode-442000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-cg28g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-multinode-442000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	  Normal  NodeReady                14m                kubelet          Node multinode-442000 status is now: NodeReady
	
	
	Name:               multinode-442000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-442000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-442000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T19_22_02_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:22:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-442000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:34:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:22:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.80.135
	  Hostname:    multinode-442000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 35b6f7da4d3943d99d8a5913cae1c8fb
	  System UUID:                0b9b8376-0767-f940-9973-d373e3dc050d
	  Boot ID:                    45d479cc-26e8-46a6-9431-50637071f586
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-8drpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kindnet-c7m4p               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-72dzs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet          Node multinode-442000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x5 over 12m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	  Normal  NodeReady                12m                kubelet          Node multinode-442000-m02 status is now: NodeReady
	
	
	Name:               multinode-442000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-442000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-442000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T19_26_25_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:26:25 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-442000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:33:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 14 Mar 2024 19:32:03 +0000   Thu, 14 Mar 2024 19:33:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 14 Mar 2024 19:32:03 +0000   Thu, 14 Mar 2024 19:33:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 14 Mar 2024 19:32:03 +0000   Thu, 14 Mar 2024 19:33:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 14 Mar 2024 19:32:03 +0000   Thu, 14 Mar 2024 19:33:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.85.186
	  Hostname:    multinode-442000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 95f7e177855d4ce6b7cbd71b3fdc8796
	  System UUID:                71573585-d564-f043-9154-3d5854ce61b8
	  Boot ID:                    ecc3a90f-bb14-45bd-8273-3432f9a984f5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-r7zdb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m
	  kube-system                 kube-proxy-w2qls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m50s              kube-proxy       
	  Normal  NodeHasSufficientMemory  8m (x5 over 8m2s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m (x5 over 8m2s)  kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m (x5 over 8m2s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m59s              node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
	  Normal  NodeReady                7m42s              kubelet          Node multinode-442000-m03 status is now: NodeReady
	  Normal  NodeNotReady             39s                node-controller  Node multinode-442000-m03 status is now: NodeNotReady
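
Note on multinode-442000-m03: the Unknown conditions and the node.kubernetes.io/unreachable NoExecute/NoSchedule taints above are the node lifecycle controller's standard response once a kubelet stops posting status, which is what TestMultiNode/serial/StopNode provokes by stopping that node (see the NodeNotReady event 39s before this dump). A minimal client-go sketch, assuming a kubeconfig at the default path (the node name comes from the output above), that prints the same conditions and taints:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); this run pointed
	// KUBECONFIG at C:\Users\jenkins.minikube7\minikube-integration\kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"multinode-442000-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Mirrors the Conditions and Taints sections of `kubectl describe node`.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-8s %s\n", c.Type, c.Status, c.Reason)
	}
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint: %s:%s\n", t.Key, t.Effect)
	}
}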
	
	
	==> dmesg <==
	[  +5.966249] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.436645] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.171687] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[Mar14 19:18] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.091270] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.500402] systemd-fstab-generator[977]: Ignoring "noauto" option for root device
	[  +0.197011] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.205731] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +2.775868] systemd-fstab-generator[1172]: Ignoring "noauto" option for root device
	[  +0.177460] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.203045] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.266065] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[ +13.055443] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.108732] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.915011] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	[  +7.510287] systemd-fstab-generator[1792]: Ignoring "noauto" option for root device
	[  +0.092469] kauditd_printk_skb: 73 callbacks suppressed
	[Mar14 19:19] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.126539] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.125030] systemd-fstab-generator[4402]: Ignoring "noauto" option for root device
	[  +0.147537] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.428947] kauditd_printk_skb: 51 callbacks suppressed
	[Mar14 19:22] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [9585e3eb2ead] <==
	{"level":"info","ts":"2024-03-14T19:26:28.301439Z","caller":"traceutil/trace.go:171","msg":"trace[238900786] transaction","detail":"{read_only:false; response_revision:921; number_of_response:1; }","duration":"109.811528ms","start":"2024-03-14T19:26:28.191522Z","end":"2024-03-14T19:26:28.301333Z","steps":["trace[238900786] 'process raft request'  (duration: 109.155861ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T19:26:28.657446Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.135956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-03-14T19:26:28.657542Z","caller":"traceutil/trace.go:171","msg":"trace[2119101273] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:921; }","duration":"256.242066ms","start":"2024-03-14T19:26:28.401285Z","end":"2024-03-14T19:26:28.657527Z","steps":["trace[2119101273] 'range keys from in-memory index tree'  (duration: 256.042746ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T19:26:28.828879Z","caller":"traceutil/trace.go:171","msg":"trace[311403769] transaction","detail":"{read_only:false; response_revision:922; number_of_response:1; }","duration":"167.37356ms","start":"2024-03-14T19:26:28.661488Z","end":"2024-03-14T19:26:28.828862Z","steps":["trace[311403769] 'process raft request'  (duration: 167.189142ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T19:26:38.534548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.706725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-442000-m03\" ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2024-03-14T19:26:38.534625Z","caller":"traceutil/trace.go:171","msg":"trace[997703428] range","detail":"{range_begin:/registry/minions/multinode-442000-m03; range_end:; response_count:1; response_revision:940; }","duration":"176.794934ms","start":"2024-03-14T19:26:38.357815Z","end":"2024-03-14T19:26:38.53461Z","steps":["trace[997703428] 'range keys from in-memory index tree'  (duration: 176.489203ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T19:26:38.741911Z","caller":"traceutil/trace.go:171","msg":"trace[523352154] transaction","detail":"{read_only:false; response_revision:941; number_of_response:1; }","duration":"337.263802ms","start":"2024-03-14T19:26:38.404632Z","end":"2024-03-14T19:26:38.741896Z","steps":["trace[523352154] 'process raft request'  (duration: 337.155791ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T19:26:38.742208Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T19:26:38.404612Z","time spent":"337.365212ms","remote":"127.0.0.1:37118","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":569,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-442000-m02\" mod_revision:921 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-442000-m02\" value_size:508 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-442000-m02\" > >"}
	{"level":"info","ts":"2024-03-14T19:28:58.017013Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":758}
	{"level":"info","ts":"2024-03-14T19:28:58.019324Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":758,"took":"1.18993ms","hash":968606657}
	{"level":"info","ts":"2024-03-14T19:28:58.019629Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":968606657,"revision":758,"compact-revision":-1}
	{"level":"info","ts":"2024-03-14T19:33:40.379429Z","caller":"traceutil/trace.go:171","msg":"trace[1112312551] linearizableReadLoop","detail":"{readStateIndex:1566; appliedIndex:1565; }","duration":"116.019709ms","start":"2024-03-14T19:33:40.263394Z","end":"2024-03-14T19:33:40.379414Z","steps":["trace[1112312551] 'read index received'  (duration: 115.875793ms)","trace[1112312551] 'applied index is now lower than readState.Index'  (duration: 143.316µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T19:33:40.379929Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.563772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T19:33:40.379983Z","caller":"traceutil/trace.go:171","msg":"trace[2136804309] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1370; }","duration":"116.63018ms","start":"2024-03-14T19:33:40.263341Z","end":"2024-03-14T19:33:40.379971Z","steps":["trace[2136804309] 'agreement among raft nodes before linearized reading'  (duration: 116.179128ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T19:33:40.380338Z","caller":"traceutil/trace.go:171","msg":"trace[492976900] transaction","detail":"{read_only:false; response_revision:1370; number_of_response:1; }","duration":"131.153959ms","start":"2024-03-14T19:33:40.249173Z","end":"2024-03-14T19:33:40.380327Z","steps":["trace[492976900] 'process raft request'  (duration: 130.133941ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T19:33:44.591974Z","caller":"traceutil/trace.go:171","msg":"trace[661932523] transaction","detail":"{read_only:false; response_revision:1374; number_of_response:1; }","duration":"184.974788ms","start":"2024-03-14T19:33:44.406981Z","end":"2024-03-14T19:33:44.591956Z","steps":["trace[661932523] 'process raft request'  (duration: 184.712058ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T19:33:50.896094Z","caller":"traceutil/trace.go:171","msg":"trace[823893338] transaction","detail":"{read_only:false; response_revision:1388; number_of_response:1; }","duration":"196.594446ms","start":"2024-03-14T19:33:50.699482Z","end":"2024-03-14T19:33:50.896077Z","steps":["trace[823893338] 'process raft request'  (duration: 196.486934ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T19:33:52.690029Z","caller":"traceutil/trace.go:171","msg":"trace[533814961] transaction","detail":"{read_only:false; response_revision:1391; number_of_response:1; }","duration":"100.356414ms","start":"2024-03-14T19:33:52.589654Z","end":"2024-03-14T19:33:52.69001Z","steps":["trace[533814961] 'process raft request'  (duration: 32.930911ms)","trace[533814961] 'compare'  (duration: 67.253783ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T19:33:58.146317Z","caller":"traceutil/trace.go:171","msg":"trace[840314161] compact","detail":"{revision:1090; response_revision:1395; }","duration":"109.384067ms","start":"2024-03-14T19:33:58.036914Z","end":"2024-03-14T19:33:58.146298Z","steps":["trace[840314161] 'process raft request'  (duration: 39.531878ms)","trace[840314161] 'check and update compact revision'  (duration: 69.664267ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T19:33:58.146892Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1090}
	{"level":"info","ts":"2024-03-14T19:33:58.148384Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1090,"took":"1.000116ms","hash":1796084551}
	{"level":"info","ts":"2024-03-14T19:33:58.148434Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1796084551,"revision":1090,"compact-revision":758}
	{"level":"info","ts":"2024-03-14T19:33:59.838192Z","caller":"traceutil/trace.go:171","msg":"trace[341358702] transaction","detail":"{read_only:false; response_revision:1398; number_of_response:1; }","duration":"118.56143ms","start":"2024-03-14T19:33:59.719611Z","end":"2024-03-14T19:33:59.838172Z","steps":["trace[341358702] 'process raft request'  (duration: 118.21849ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T19:34:02.701303Z","caller":"traceutil/trace.go:171","msg":"trace[850547472] transaction","detail":"{read_only:false; response_revision:1401; number_of_response:1; }","duration":"115.233049ms","start":"2024-03-14T19:34:02.586006Z","end":"2024-03-14T19:34:02.701239Z","steps":["trace[850547472] 'process raft request'  (duration: 46.582997ms)","trace[850547472] 'compare'  (duration: 68.548941ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T19:34:03.237118Z","caller":"traceutil/trace.go:171","msg":"trace[147474340] transaction","detail":"{read_only:false; response_revision:1402; number_of_response:1; }","duration":"102.551881ms","start":"2024-03-14T19:34:03.134547Z","end":"2024-03-14T19:34:03.237099Z","steps":["trace[147474340] 'process raft request'  (duration: 102.268649ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:34:25 up 17 min,  0 users,  load average: 0.21, 0.31, 0.26
	Linux multinode-442000 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1a321c0e8997] <==
	I0314 19:33:36.841183       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:33:46.854483       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:33:46.854585       1 main.go:227] handling current node
	I0314 19:33:46.854600       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:33:46.854608       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:33:46.855303       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:33:46.855389       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:33:56.867052       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:33:56.867136       1 main.go:227] handling current node
	I0314 19:33:56.867150       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:33:56.867158       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:33:56.867493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:33:56.867886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:34:06.874298       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:34:06.874391       1 main.go:227] handling current node
	I0314 19:34:06.874405       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:34:06.874413       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:34:06.874932       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:34:06.874962       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:34:16.890513       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:34:16.890589       1 main.go:227] handling current node
	I0314 19:34:16.890604       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:34:16.890612       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:34:16.890870       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:34:16.890953       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
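
The kindnet block is one reconcile pass repeating roughly every ten seconds: the pod logs "handling current node" for the node it runs on, and for every other node it ensures that node's PodCIDR is reachable via its InternalIP. A simplified sketch of that loop, not kindnet's actual code, driven by the node data visible above (real kindnet programs a kernel route where this prints):

package main

import "fmt"

type nodeInfo struct {
	name string // node name
	ip   string // InternalIP
	cidr string // PodCIDR
}

// reconcile mirrors the log pattern above (main.go:223/227/250): skip the
// local node, and route every remote PodCIDR via the owning node's IP.
func reconcile(current string, nodes []nodeInfo) {
	for _, n := range nodes {
		fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
		if n.name == current {
			fmt.Println("handling current node")
			continue
		}
		fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.cidr)
	}
}

func main() {
	reconcile("multinode-442000", []nodeInfo{
		{"multinode-442000", "172.17.86.124", "10.244.0.0/24"},
		{"multinode-442000-m02", "172.17.80.135", "10.244.1.0/24"},
		{"multinode-442000-m03", "172.17.85.186", "10.244.2.0/24"},
	})
}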
	
	
	==> kube-apiserver [cd640f130e42] <==
	I0314 19:19:02.338109       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 19:19:02.515980       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0314 19:19:02.531592       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.17.86.124]
	I0314 19:19:02.533129       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 19:19:02.541303       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 19:19:03.233535       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 19:19:04.375127       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 19:19:04.404662       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0314 19:19:04.419364       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 19:19:16.278098       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0314 19:19:16.777362       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0314 19:22:06.744902       1 trace.go:236] Trace[2066474087]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:81ead457-b6db-4a38-8f07-c91ac503f121,client:172.17.86.124,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-442000-m02,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:node-controller,verb:PATCH (14-Mar-2024 19:22:06.227) (total time: 516ms):
	Trace[2066474087]: ["GuaranteedUpdate etcd3" audit-id:81ead457-b6db-4a38-8f07-c91ac503f121,key:/minions/multinode-442000-m02,type:*core.Node,resource:nodes 516ms (19:22:06.227)
	Trace[2066474087]:  ---"Txn call completed" 514ms (19:22:06.743)]
	Trace[2066474087]: ---"Object stored in database" 514ms (19:22:06.743)
	Trace[2066474087]: [516.448841ms] [516.448841ms] END
	I0314 19:22:13.272541       1 trace.go:236] Trace[889089018]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b82fc3fc-7e3f-4e3e-bc0a-01ae982f3b56,client:172.17.80.135,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-442000-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (14-Mar-2024 19:22:12.681) (total time: 590ms):
	Trace[889089018]: ["GuaranteedUpdate etcd3" audit-id:b82fc3fc-7e3f-4e3e-bc0a-01ae982f3b56,key:/minions/multinode-442000-m02,type:*core.Node,resource:nodes 590ms (19:22:12.682)
	Trace[889089018]:  ---"Txn call completed" 587ms (19:22:13.272)]
	Trace[889089018]: ---"Object stored in database" 587ms (19:22:13.272)
	Trace[889089018]: [590.631134ms] [590.631134ms] END
	I0314 19:22:13.354500       1 trace.go:236] Trace[1511663482]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.17.86.124,type:*v1.Endpoints,resource:apiServerIPInfo (14-Mar-2024 19:22:12.501) (total time: 853ms):
	Trace[1511663482]: ---"Transaction prepared" 720ms (19:22:13.271)
	Trace[1511663482]: ---"Txn call completed" 82ms (19:22:13.354)
	Trace[1511663482]: [853.309636ms] [853.309636ms] END
	
	
	==> kube-controller-manager [16b80f73683d] <==
	I0314 19:22:06.146201       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:22:20.862710       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:22:45.188036       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0314 19:22:45.218022       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8drpb"
	I0314 19:22:45.241867       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-7446n"
	I0314 19:22:45.267427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="80.313691ms"
	I0314 19:22:45.292961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="25.159362ms"
	I0314 19:22:45.311264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.241692ms"
	I0314 19:22:45.311407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="93.911µs"
	I0314 19:22:48.320252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.515467ms"
	I0314 19:22:48.320403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.303µs"
	I0314 19:22:48.344640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.018521ms"
	I0314 19:22:48.344838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.804µs"
	I0314 19:26:25.208780       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:26:25.214591       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:26:25.248082       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.2.0/24"]
	I0314 19:26:25.265233       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-r7zdb"
	I0314 19:26:25.273144       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w2qls"
	I0314 19:26:26.207170       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:26:26.207236       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:26:43.758846       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:33:46.333556       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:33:46.333891       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:33:46.348976       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:33:46.370200       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [2a62baf3f1b4] <==
	I0314 19:19:18.247796       1 server_others.go:69] "Using iptables proxy"
	I0314 19:19:18.275162       1 node.go:141] Successfully retrieved node IP: 172.17.86.124
	I0314 19:19:18.379821       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:19:18.379851       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:19:18.395429       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:19:18.395506       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:19:18.395856       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:19:18.395890       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:19:18.417861       1 config.go:188] "Starting service config controller"
	I0314 19:19:18.417913       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:19:18.417950       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:19:18.420511       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:19:18.426566       1 config.go:315] "Starting node config controller"
	I0314 19:19:18.426600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:19:18.519508       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:19:18.524347       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:19:18.527360       1 shared_informer.go:318] Caches are synced for node config
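
Two details in the kube-proxy startup above: it runs single-stack IPv4 because the guest reports no iptables support for the IPv6 family (the same kernel gap behind the recurring kubelet ip6tables errors further down), and the "Waiting for caches to sync" / "Caches are synced" pairs are the standard client-go shared-informer handshake, in which handlers must not run before the informer's initial LIST completes. A minimal sketch of that handshake, assuming a kubeconfig at the default path:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same pattern as kube-proxy's service/endpoint-slice/node configs:
	// start the informer, then block until its cache has synced.
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	svc := factory.Core().V1().Services().Informer()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	factory.Start(ctx.Done())
	if !cache.WaitForCacheSync(ctx.Done(), svc.HasSynced) {
		panic("caches did not sync")
	}
	fmt.Println("caches are synced for service config")
}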
	
	
	==> kube-scheduler [dbb603289bf1] <==
	W0314 19:19:01.382148       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 19:19:01.382194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 19:19:01.454259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0314 19:19:01.454398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 19:19:01.505982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 19:19:01.506182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 19:19:01.640521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 19:19:01.640836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0314 19:19:01.681052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 19:19:01.681953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 19:19:01.732243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 19:19:01.732288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 19:19:01.767241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 19:19:01.767329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 19:19:01.783665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 19:19:01.783845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0314 19:19:01.812936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 19:19:01.813027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 19:19:01.821109       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 19:19:01.821267       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 19:19:01.843311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 19:19:01.843339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 19:19:01.914649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 19:19:01.914986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:19:04.090863       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 19:30:04 multinode-442000 kubelet[2820]: E0314 19:30:04.692008    2820 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:30:04 multinode-442000 kubelet[2820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:30:04 multinode-442000 kubelet[2820]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:30:04 multinode-442000 kubelet[2820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:30:04 multinode-442000 kubelet[2820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:31:04 multinode-442000 kubelet[2820]: E0314 19:31:04.691782    2820 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:31:04 multinode-442000 kubelet[2820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:31:04 multinode-442000 kubelet[2820]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:31:04 multinode-442000 kubelet[2820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:31:04 multinode-442000 kubelet[2820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:32:04 multinode-442000 kubelet[2820]: E0314 19:32:04.689494    2820 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:32:04 multinode-442000 kubelet[2820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:32:04 multinode-442000 kubelet[2820]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:32:04 multinode-442000 kubelet[2820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:32:04 multinode-442000 kubelet[2820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:33:04 multinode-442000 kubelet[2820]: E0314 19:33:04.690279    2820 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:33:04 multinode-442000 kubelet[2820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:33:04 multinode-442000 kubelet[2820]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:33:04 multinode-442000 kubelet[2820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:33:04 multinode-442000 kubelet[2820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:34:04 multinode-442000 kubelet[2820]: E0314 19:34:04.689992    2820 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:34:04 multinode-442000 kubelet[2820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:34:04 multinode-442000 kubelet[2820]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:34:04 multinode-442000 kubelet[2820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:34:04 multinode-442000 kubelet[2820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
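
The kubelet errors above repeat once a minute: kubelet periodically recreates a KUBE-KUBELET-CANARY chain so it can detect iptables flushes, and the ip6tables half of the probe fails because this Buildroot guest kernel has no IPv6 nat table. For an IPv4-only cluster like this one the error is noisy but benign. A tiny sketch that reproduces the probe by hand, assuming it runs inside the guest (e.g. over minikube ssh) where ip6tables v1.8.9 is installed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same chain creation kubelet attempts; on this guest it exits
	// with status 3 because the ip6tables nat table cannot be initialized.
	out, err := exec.Command("ip6tables", "-t", "nat",
		"-N", "KUBE-KUBELET-CANARY").CombinedOutput()
	fmt.Printf("err: %v\n%s", err, out)
}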
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 19:34:17.907033    3240 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-442000 -n multinode-442000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-442000 -n multinode-442000: (11.1188522s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-442000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopNode (97.15s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (565.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-442000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-442000
E0314 19:38:18.290510   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 19:38:38.526939   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-442000: (1m32.4729086s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-442000 --wait=true -v=8 --alsologtostderr
E0314 19:41:41.803562   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 19:43:18.300935   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 19:43:38.550715   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-442000 --wait=true -v=8 --alsologtostderr: exit status 1 (7m2.8189774s)

                                                
                                                
-- stdout --
	* [multinode-442000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-442000" primary control-plane node in "multinode-442000" cluster
	* Restarting existing hyperv VM for "multinode-442000" ...
	* Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-442000-m02" worker node in "multinode-442000" cluster
	* Restarting existing hyperv VM for "multinode-442000-m02" ...
	* Found network options:
	  - NO_PROXY=172.17.93.236
	  - NO_PROXY=172.17.93.236
	* Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	  - env NO_PROXY=172.17.93.236
	* Verifying Kubernetes components...
	
	* Starting "multinode-442000-m03" worker node in "multinode-442000" cluster
	* Restarting existing hyperv VM for "multinode-442000-m03" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 19:39:02.574596    8428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0314 19:39:02.625615    8428 out.go:291] Setting OutFile to fd 1780 ...
	I0314 19:39:02.626675    8428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:39:02.626675    8428 out.go:304] Setting ErrFile to fd 1656...
	I0314 19:39:02.626675    8428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:39:02.646420    8428 out.go:298] Setting JSON to false
	I0314 19:39:02.649032    8428 start.go:129] hostinfo: {"hostname":"minikube7","uptime":66947,"bootTime":1710378195,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 19:39:02.649032    8428 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 19:39:02.676633    8428 out.go:177] * [multinode-442000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 19:39:02.876298    8428 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:39:02.719328    8428 notify.go:220] Checking for updates...
	I0314 19:39:03.065147    8428 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:39:03.115186    8428 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 19:39:03.254105    8428 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:39:03.420663    8428 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:39:03.429141    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:39:03.429417    8428 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:39:08.617424    8428 out.go:177] * Using the hyperv driver based on existing profile
	I0314 19:39:08.622317    8428 start.go:297] selected driver: hyperv
	I0314 19:39:08.622317    8428 start.go:901] validating driver "hyperv" against &{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.84.215 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:39:08.622487    8428 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:39:08.669081    8428 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:39:08.669151    8428 cni.go:84] Creating CNI manager for ""
	I0314 19:39:08.669151    8428 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 19:39:08.669295    8428 start.go:340] cluster config:
	{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.84.215 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner
:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:39:08.669603    8428 iso.go:125] acquiring lock: {Name:mk1b3e73402180391a20a865a9454da445c269fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:39:08.823039    8428 out.go:177] * Starting "multinode-442000" primary control-plane node in "multinode-442000" cluster
	I0314 19:39:08.872180    8428 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:39:08.872280    8428 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0314 19:39:08.872280    8428 cache.go:56] Caching tarball of preloaded images
	I0314 19:39:08.872812    8428 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 19:39:08.873066    8428 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 19:39:08.873445    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:39:08.877074    8428 start.go:360] acquireMachinesLock for multinode-442000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:39:08.877162    8428 start.go:364] duration metric: took 88.5µs to acquireMachinesLock for "multinode-442000"
	I0314 19:39:08.877162    8428 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:39:08.877162    8428 fix.go:54] fixHost starting: 
	I0314 19:39:08.877808    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:11.462259    8428 main.go:141] libmachine: [stdout =====>] : Off
	
	I0314 19:39:11.462259    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:11.462259    8428 fix.go:112] recreateIfNeeded on multinode-442000: state=Stopped err=<nil>
	W0314 19:39:11.462259    8428 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:39:11.527884    8428 out.go:177] * Restarting existing hyperv VM for "multinode-442000" ...
	I0314 19:39:11.531003    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-442000
	I0314 19:39:15.520294    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:39:15.520294    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:15.520294    8428 main.go:141] libmachine: Waiting for host to start...
	I0314 19:39:15.520294    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:17.578362    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:17.578865    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:17.578865    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:19.898828    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:39:19.898828    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:20.908383    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:22.933851    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:22.933851    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:22.934499    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:25.225186    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:39:25.225186    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:26.227725    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:28.251206    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:28.251388    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:28.251486    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:30.558089    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:39:30.558089    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:31.566622    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:33.559717    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:33.559781    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:33.559781    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:35.875289    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:39:35.875289    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:36.886006    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:38.917520    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:38.917939    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:38.917939    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:41.267585    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:39:41.267585    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:41.270463    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:43.251733    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:43.251880    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:43.251957    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:45.644162    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:39:45.644162    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:45.644792    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:39:45.646761    8428 machine.go:94] provisionDockerMachine start ...
	I0314 19:39:45.646870    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:47.623471    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:47.623557    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:47.623557    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:49.994101    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:39:49.994101    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:49.998736    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:39:49.998736    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:39:49.998736    8428 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:39:50.139786    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:39:50.139884    8428 buildroot.go:166] provisioning hostname "multinode-442000"
	I0314 19:39:50.140008    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:52.110791    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:52.110791    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:52.110791    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:54.474094    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:39:54.474094    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:54.478157    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:39:54.478566    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:39:54.478647    8428 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-442000 && echo "multinode-442000" | sudo tee /etc/hostname
	I0314 19:39:54.645826    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-442000
	
	I0314 19:39:54.645915    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:56.597485    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:56.597485    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:56.597797    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:58.974093    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:39:58.974093    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:58.981067    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:39:58.981067    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:39:58.981067    8428 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-442000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-442000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-442000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:39:59.130757    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:39:59.130757    8428 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 19:39:59.130757    8428 buildroot.go:174] setting up certificates
	I0314 19:39:59.130757    8428 provision.go:84] configureAuth start
	I0314 19:39:59.131540    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:01.112146    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:01.112146    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:01.112204    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:03.486170    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:03.486170    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:03.486170    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:05.459428    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:05.459428    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:05.459428    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:07.792496    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:07.792496    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:07.792496    8428 provision.go:143] copyHostCerts
	I0314 19:40:07.793369    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 19:40:07.793369    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 19:40:07.793369    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 19:40:07.794065    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 19:40:07.795007    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 19:40:07.795797    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 19:40:07.795961    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 19:40:07.795961    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 19:40:07.796719    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 19:40:07.797326    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 19:40:07.797326    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 19:40:07.797326    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 19:40:07.797996    8428 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-442000 san=[127.0.0.1 172.17.93.236 localhost minikube multinode-442000]
	I0314 19:40:08.179126    8428 provision.go:177] copyRemoteCerts
	I0314 19:40:08.191121    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:40:08.191121    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:10.185425    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:10.185425    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:10.186036    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:12.534992    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:12.535721    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:12.535721    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:40:12.643746    8428 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4522878s)
	I0314 19:40:12.645778    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 19:40:12.646410    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:40:12.690092    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 19:40:12.690092    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0314 19:40:12.736222    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 19:40:12.736595    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:40:12.783798    8428 provision.go:87] duration metric: took 13.6520056s to configureAuth
	I0314 19:40:12.783938    8428 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:40:12.784532    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:40:12.784623    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:14.772316    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:14.772571    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:14.772571    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:17.126045    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:17.126045    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:17.130726    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:40:17.131251    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:40:17.131364    8428 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 19:40:17.274520    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 19:40:17.274520    8428 buildroot.go:70] root file system type: tmpfs
	I0314 19:40:17.274520    8428 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 19:40:17.274520    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:19.239278    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:19.239278    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:19.240298    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:21.613985    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:21.613985    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:21.618312    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:40:21.618465    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:40:21.618465    8428 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 19:40:21.786728    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 19:40:21.786728    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:23.741801    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:23.741801    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:23.742000    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:26.151856    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:26.151856    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:26.156707    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:40:26.156707    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:40:26.156707    8428 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 19:40:28.541060    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 19:40:28.541142    8428 machine.go:97] duration metric: took 42.8911279s to provisionDockerMachine
	I0314 19:40:28.541142    8428 start.go:293] postStartSetup for "multinode-442000" (driver="hyperv")
	I0314 19:40:28.541142    8428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:40:28.552934    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:40:28.552934    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:30.512463    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:30.512463    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:30.512463    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:32.860394    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:32.860394    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:32.861252    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:40:32.968061    8428 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4147186s)
	I0314 19:40:32.976856    8428 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:40:32.983165    8428 command_runner.go:130] > NAME=Buildroot
	I0314 19:40:32.983165    8428 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 19:40:32.983285    8428 command_runner.go:130] > ID=buildroot
	I0314 19:40:32.983285    8428 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 19:40:32.983285    8428 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 19:40:32.983350    8428 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:40:32.983350    8428 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 19:40:32.983350    8428 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 19:40:32.984582    8428 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 19:40:32.984582    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 19:40:32.994800    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:40:33.010951    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 19:40:33.054192    8428 start.go:296] duration metric: took 4.5127083s for postStartSetup
	I0314 19:40:33.054192    8428 fix.go:56] duration metric: took 1m24.1706439s for fixHost
	I0314 19:40:33.054192    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:35.037620    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:35.037620    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:35.037620    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:37.375754    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:37.375754    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:37.379584    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:40:37.380125    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:40:37.380125    8428 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:40:37.519664    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710445237.779688673
	
	I0314 19:40:37.519664    8428 fix.go:216] guest clock: 1710445237.779688673
	I0314 19:40:37.519664    8428 fix.go:229] Guest: 2024-03-14 19:40:37.779688673 +0000 UTC Remote: 2024-03-14 19:40:33.0541927 +0000 UTC m=+90.580944101 (delta=4.725495973s)
	I0314 19:40:37.519734    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:39.497293    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:39.498250    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:39.498372    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:41.891520    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:41.891520    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:41.895458    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:40:41.896077    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:40:41.896077    8428 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710445237
	I0314 19:40:42.049221    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 19:40:37 UTC 2024
	
	I0314 19:40:42.049221    8428 fix.go:236] clock set: Thu Mar 14 19:40:37 UTC 2024
	 (err=<nil>)
	I0314 19:40:42.049386    8428 start.go:83] releasing machines lock for "multinode-442000", held for 1m33.1651553s
	I0314 19:40:42.049461    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:44.010021    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:44.010705    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:44.010782    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:46.365248    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:46.365577    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:46.368874    8428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:40:46.368953    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:46.376353    8428 ssh_runner.go:195] Run: cat /version.json
	I0314 19:40:46.376353    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:48.346155    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:48.346155    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:48.346155    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:48.348108    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:48.348108    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:48.348108    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:50.725561    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:50.725561    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:50.726613    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:40:50.769534    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:50.769761    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:50.769761    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:40:50.956491    8428 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 19:40:50.956491    8428 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5872692s)
	I0314 19:40:50.956693    8428 command_runner.go:130] > {"iso_version": "v1.32.1-1710348681-18375", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "fd5757a6603390a2c0efe3b1e5cdd797538203fd"}
	I0314 19:40:50.956780    8428 ssh_runner.go:235] Completed: cat /version.json: (4.5799927s)
	I0314 19:40:50.966080    8428 ssh_runner.go:195] Run: systemctl --version
	I0314 19:40:50.974657    8428 command_runner.go:130] > systemd 252 (252)
	I0314 19:40:50.974657    8428 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0314 19:40:50.984378    8428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 19:40:50.991360    8428 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0314 19:40:50.992238    8428 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:40:51.000634    8428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:40:51.026317    8428 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0314 19:40:51.026451    8428 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:40:51.026451    8428 start.go:494] detecting cgroup driver to use...
	I0314 19:40:51.026451    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:40:51.061844    8428 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0314 19:40:51.073589    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 19:40:51.101324    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 19:40:51.119293    8428 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 19:40:51.127857    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 19:40:51.154447    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:40:51.182910    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 19:40:51.211448    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:40:51.237874    8428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:40:51.266309    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 19:40:51.294353    8428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:40:51.310243    8428 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 19:40:51.320623    8428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:40:51.349378    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:40:51.545869    8428 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 19:40:51.574590    8428 start.go:494] detecting cgroup driver to use...
	I0314 19:40:51.586105    8428 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 19:40:51.607564    8428 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0314 19:40:51.607564    8428 command_runner.go:130] > [Unit]
	I0314 19:40:51.607564    8428 command_runner.go:130] > Description=Docker Application Container Engine
	I0314 19:40:51.607564    8428 command_runner.go:130] > Documentation=https://docs.docker.com
	I0314 19:40:51.607564    8428 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0314 19:40:51.607564    8428 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0314 19:40:51.607564    8428 command_runner.go:130] > StartLimitBurst=3
	I0314 19:40:51.607564    8428 command_runner.go:130] > StartLimitIntervalSec=60
	I0314 19:40:51.607564    8428 command_runner.go:130] > [Service]
	I0314 19:40:51.607564    8428 command_runner.go:130] > Type=notify
	I0314 19:40:51.607564    8428 command_runner.go:130] > Restart=on-failure
	I0314 19:40:51.607564    8428 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0314 19:40:51.607564    8428 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0314 19:40:51.607564    8428 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0314 19:40:51.607564    8428 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0314 19:40:51.607564    8428 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0314 19:40:51.607564    8428 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0314 19:40:51.607564    8428 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0314 19:40:51.607564    8428 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0314 19:40:51.607564    8428 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0314 19:40:51.607564    8428 command_runner.go:130] > ExecStart=
	I0314 19:40:51.607564    8428 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0314 19:40:51.608058    8428 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0314 19:40:51.608058    8428 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0314 19:40:51.608058    8428 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0314 19:40:51.608058    8428 command_runner.go:130] > LimitNOFILE=infinity
	I0314 19:40:51.608058    8428 command_runner.go:130] > LimitNPROC=infinity
	I0314 19:40:51.608058    8428 command_runner.go:130] > LimitCORE=infinity
	I0314 19:40:51.608058    8428 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0314 19:40:51.608058    8428 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0314 19:40:51.608058    8428 command_runner.go:130] > TasksMax=infinity
	I0314 19:40:51.608058    8428 command_runner.go:130] > TimeoutStartSec=0
	I0314 19:40:51.608058    8428 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0314 19:40:51.608058    8428 command_runner.go:130] > Delegate=yes
	I0314 19:40:51.608058    8428 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0314 19:40:51.608058    8428 command_runner.go:130] > KillMode=process
	I0314 19:40:51.608058    8428 command_runner.go:130] > [Install]
	I0314 19:40:51.608058    8428 command_runner.go:130] > WantedBy=multi-user.target
	I0314 19:40:51.618678    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:40:51.651292    8428 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:40:51.683681    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:40:51.714551    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:40:51.745489    8428 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 19:40:51.805850    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:40:51.828345    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:40:51.861970    8428 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0314 19:40:51.874456    8428 ssh_runner.go:195] Run: which cri-dockerd
	I0314 19:40:51.880911    8428 command_runner.go:130] > /usr/bin/cri-dockerd
	I0314 19:40:51.891375    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 19:40:51.907991    8428 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 19:40:51.945642    8428 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 19:40:52.127221    8428 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 19:40:52.308629    8428 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 19:40:52.308852    8428 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 19:40:52.347014    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:40:52.537598    8428 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 19:40:55.155720    8428 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.617924s)
	I0314 19:40:55.167960    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 19:40:55.201822    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:40:55.232206    8428 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 19:40:55.423642    8428 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 19:40:55.609931    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:40:55.797295    8428 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 19:40:55.835509    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:40:55.866682    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:40:56.052216    8428 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 19:40:56.149554    8428 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 19:40:56.158895    8428 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 19:40:56.168281    8428 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0314 19:40:56.168281    8428 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 19:40:56.168281    8428 command_runner.go:130] > Device: 0,22	Inode: 856         Links: 1
	I0314 19:40:56.168281    8428 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0314 19:40:56.168281    8428 command_runner.go:130] > Access: 2024-03-14 19:40:56.338615339 +0000
	I0314 19:40:56.168739    8428 command_runner.go:130] > Modify: 2024-03-14 19:40:56.338615339 +0000
	I0314 19:40:56.168739    8428 command_runner.go:130] > Change: 2024-03-14 19:40:56.341615570 +0000
	I0314 19:40:56.168739    8428 command_runner.go:130] >  Birth: -
	I0314 19:40:56.168797    8428 start.go:562] Will wait 60s for crictl version
	I0314 19:40:56.178007    8428 ssh_runner.go:195] Run: which crictl
	I0314 19:40:56.185001    8428 command_runner.go:130] > /usr/bin/crictl
	I0314 19:40:56.193733    8428 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:40:56.257753    8428 command_runner.go:130] > Version:  0.1.0
	I0314 19:40:56.257753    8428 command_runner.go:130] > RuntimeName:  docker
	I0314 19:40:56.257753    8428 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0314 19:40:56.257753    8428 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 19:40:56.260162    8428 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 19:40:56.266763    8428 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:40:56.296840    8428 command_runner.go:130] > 25.0.4
	I0314 19:40:56.305077    8428 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:40:56.336519    8428 command_runner.go:130] > 25.0.4
	I0314 19:40:56.342370    8428 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 19:40:56.342370    8428 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 19:40:56.347124    8428 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 19:40:56.347124    8428 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 19:40:56.347124    8428 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 19:40:56.347124    8428 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 19:40:56.349770    8428 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 19:40:56.349770    8428 ip.go:210] interface addr: 172.17.80.1/20
	I0314 19:40:56.357988    8428 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 19:40:56.364079    8428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:40:56.384355    8428 kubeadm.go:877] updating cluster {Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.93.236 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.84.215 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:40:56.384641    8428 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:40:56.391424    8428 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 19:40:56.419361    8428 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0314 19:40:56.419361    8428 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:40:56.419361    8428 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0314 19:40:56.420806    8428 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0314 19:40:56.420806    8428 docker.go:615] Images already preloaded, skipping extraction
	I0314 19:40:56.431512    8428 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 19:40:56.456399    8428 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0314 19:40:56.456480    8428 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:40:56.456480    8428 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0314 19:40:56.456480    8428 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0314 19:40:56.456480    8428 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:40:56.456480    8428 kubeadm.go:928] updating node { 172.17.93.236 8443 v1.28.4 docker true true} ...
	I0314 19:40:56.456480    8428 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-442000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.93.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:40:56.463446    8428 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 19:40:56.493313    8428 command_runner.go:130] > cgroupfs
	I0314 19:40:56.494532    8428 cni.go:84] Creating CNI manager for ""
	I0314 19:40:56.494603    8428 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 19:40:56.494674    8428 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:40:56.494700    8428 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.93.236 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-442000 NodeName:multinode-442000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.93.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.93.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:40:56.494700    8428 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.93.236
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-442000"
	  kubeletExtraArgs:
	    node-ip: 172.17.93.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.93.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:40:56.504511    8428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:40:56.521995    8428 command_runner.go:130] > kubeadm
	I0314 19:40:56.521995    8428 command_runner.go:130] > kubectl
	I0314 19:40:56.521995    8428 command_runner.go:130] > kubelet
	I0314 19:40:56.522073    8428 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:40:56.531041    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:40:56.546860    8428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0314 19:40:56.575351    8428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:40:56.608897    8428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
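The three scp-from-memory transfers above stage the kubelet drop-in, the kubelet unit, and the kubeadm config onto the node. The kubeadm.yaml written here is a single YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); a quick way to sanity-check such a stream is to decode each document and print its kind. An illustrative sketch using gopkg.in/yaml.v3, with an assumed local file path:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// listKinds decodes every document in a multi-document YAML stream and
// reports its apiVersion and kind, e.g. to confirm all four kubeadm
// documents made it into the generated file.
func listKinds(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			return nil // end of stream: every document decoded
		} else if err != nil {
			return err
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

func main() {
	if err := listKinds("kubeadm.yaml"); err != nil {
		fmt.Println("decode failed:", err)
	}
}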
	I0314 19:40:56.647785    8428 ssh_runner.go:195] Run: grep 172.17.93.236	control-plane.minikube.internal$ /etc/hosts
	I0314 19:40:56.653743    8428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.93.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:40:56.683448    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:40:56.876493    8428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:40:56.903499    8428 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000 for IP: 172.17.93.236
	I0314 19:40:56.903499    8428 certs.go:194] generating shared ca certs ...
	I0314 19:40:56.903499    8428 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:40:56.903499    8428 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 19:40:56.904508    8428 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 19:40:56.904508    8428 certs.go:256] generating profile certs ...
	I0314 19:40:56.905498    8428 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.key
	I0314 19:40:56.905498    8428 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.4297411e
	I0314 19:40:56.905498    8428 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.4297411e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.93.236]
	I0314 19:40:56.973061    8428 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.4297411e ...
	I0314 19:40:56.973061    8428 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.4297411e: {Name:mk3aa0c8e492a00a020e4819ada54e3fb813a9b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:40:56.974071    8428 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.4297411e ...
	I0314 19:40:56.974071    8428 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.4297411e: {Name:mk67eb1255f403684b279a0cad001ea7a631783c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:40:56.975243    8428 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.4297411e -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt
	I0314 19:40:56.989288    8428 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.4297411e -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key
	I0314 19:40:56.990279    8428 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key
	I0314 19:40:56.990279    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 19:40:56.990279    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 19:40:56.990279    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 19:40:56.990279    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 19:40:56.990279    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 19:40:56.991281    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 19:40:56.991281    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 19:40:56.991281    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 19:40:56.991281    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 19:40:56.991281    8428 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 19:40:56.991281    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 19:40:56.992289    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 19:40:56.992289    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 19:40:56.992289    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 19:40:56.992289    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 19:40:56.992289    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 19:40:56.992289    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:40:56.992289    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 19:40:56.993277    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:40:57.041055    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 19:40:57.085389    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:40:57.135501    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 19:40:57.177078    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 19:40:57.219978    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 19:40:57.263688    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:40:57.308090    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:40:57.349693    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 19:40:57.388829    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:40:57.443289    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 19:40:57.482666    8428 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:40:57.522357    8428 ssh_runner.go:195] Run: openssl version
	I0314 19:40:57.531101    8428 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 19:40:57.540550    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 19:40:57.567626    8428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 19:40:57.575461    8428 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:40:57.575461    8428 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:40:57.584643    8428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 19:40:57.592872    8428 command_runner.go:130] > 3ec20f2e
	I0314 19:40:57.601393    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:40:57.627162    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:40:57.658079    8428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:40:57.665232    8428 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:40:57.665232    8428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:40:57.674049    8428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:40:57.681843    8428 command_runner.go:130] > b5213941
	I0314 19:40:57.690689    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:40:57.717923    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 19:40:57.745112    8428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 19:40:57.751922    8428 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:40:57.752117    8428 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:40:57.763062    8428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 19:40:57.771658    8428 command_runner.go:130] > 51391683
	I0314 19:40:57.780245    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
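Each test/ls/hash/ln round above installs one CA certificate the way OpenSSL expects to find it: openssl x509 -hash -noout prints the subject-name hash (for example b5213941 for minikubeCA), and the certificate is then linked as /etc/ssl/certs/<hash>.0 so that lookup-by-hash can resolve it. A hedged sketch of the same flow; the helper name is made up, and it shells out to openssl just as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links a PEM certificate into certsDir under the
// "<subject-hash>.0" name that OpenSSL's hash-based lookup uses.
func installCACert(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Remove any stale link first, preserving ln -fs semantics.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}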
	I0314 19:40:57.810149    8428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:40:57.817135    8428 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:40:57.817379    8428 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0314 19:40:57.817379    8428 command_runner.go:130] > Device: 8,1	Inode: 9430309     Links: 1
	I0314 19:40:57.817466    8428 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 19:40:57.817466    8428 command_runner.go:130] > Access: 2024-03-14 19:18:50.767195126 +0000
	I0314 19:40:57.817466    8428 command_runner.go:130] > Modify: 2024-03-14 19:18:50.767195126 +0000
	I0314 19:40:57.817540    8428 command_runner.go:130] > Change: 2024-03-14 19:18:50.767195126 +0000
	I0314 19:40:57.817589    8428 command_runner.go:130] >  Birth: 2024-03-14 19:18:50.767195126 +0000
	I0314 19:40:57.827750    8428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:40:57.837857    8428 command_runner.go:130] > Certificate will not expire
	I0314 19:40:57.846977    8428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:40:57.856185    8428 command_runner.go:130] > Certificate will not expire
	I0314 19:40:57.864861    8428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:40:57.874470    8428 command_runner.go:130] > Certificate will not expire
	I0314 19:40:57.885563    8428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:40:57.895080    8428 command_runner.go:130] > Certificate will not expire
	I0314 19:40:57.903869    8428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:40:57.914464    8428 command_runner.go:130] > Certificate will not expire
	I0314 19:40:57.923585    8428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:40:57.933178    8428 command_runner.go:130] > Certificate will not expire
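openssl x509 -checkend 86400 exits non-zero when a certificate expires within the next 24 hours, which is the reuse test applied to each certificate above. The equivalent check in pure Go standard library, as an illustrative sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate will expire within 24h; regenerate it")
	} else {
		fmt.Println("certificate will not expire")
	}
}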
	I0314 19:40:57.933561    8428 kubeadm.go:391] StartCluster: {Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.93.236 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.84.215 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:40:57.939846    8428 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 19:40:57.974028    8428 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 19:40:57.992181    8428 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0314 19:40:57.992251    8428 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0314 19:40:57.992251    8428 command_runner.go:130] > /var/lib/minikube/etcd:
	I0314 19:40:57.992251    8428 command_runner.go:130] > member
	W0314 19:40:57.992342    8428 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:40:57.992375    8428 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:40:57.992375    8428 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:40:58.001174    8428 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:40:58.016522    8428 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:40:58.017278    8428 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-442000" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:40:58.018120    8428 kubeconfig.go:62] C:\Users\jenkins.minikube7\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-442000" cluster setting kubeconfig missing "multinode-442000" context setting]
	I0314 19:40:58.018690    8428 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:40:58.032678    8428 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:40:58.033397    8428 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.93.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000/client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:40:58.034722    8428 cert_rotation.go:137] Starting client certificate rotation controller
	I0314 19:40:58.043318    8428 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:40:58.060922    8428 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0314 19:40:58.060922    8428 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:40:58.060922    8428 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0314 19:40:58.060922    8428 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0314 19:40:58.060922    8428 command_runner.go:130] >  kind: InitConfiguration
	I0314 19:40:58.060922    8428 command_runner.go:130] >  localAPIEndpoint:
	I0314 19:40:58.060922    8428 command_runner.go:130] > -  advertiseAddress: 172.17.86.124
	I0314 19:40:58.060922    8428 command_runner.go:130] > +  advertiseAddress: 172.17.93.236
	I0314 19:40:58.060922    8428 command_runner.go:130] >    bindPort: 8443
	I0314 19:40:58.060922    8428 command_runner.go:130] >  bootstrapTokens:
	I0314 19:40:58.060922    8428 command_runner.go:130] >    - groups:
	I0314 19:40:58.060922    8428 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0314 19:40:58.060922    8428 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0314 19:40:58.060922    8428 command_runner.go:130] >    name: "multinode-442000"
	I0314 19:40:58.060922    8428 command_runner.go:130] >    kubeletExtraArgs:
	I0314 19:40:58.060922    8428 command_runner.go:130] > -    node-ip: 172.17.86.124
	I0314 19:40:58.060922    8428 command_runner.go:130] > +    node-ip: 172.17.93.236
	I0314 19:40:58.060922    8428 command_runner.go:130] >    taints: []
	I0314 19:40:58.060922    8428 command_runner.go:130] >  ---
	I0314 19:40:58.060922    8428 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0314 19:40:58.060922    8428 command_runner.go:130] >  kind: ClusterConfiguration
	I0314 19:40:58.060922    8428 command_runner.go:130] >  apiServer:
	I0314 19:40:58.060922    8428 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.17.86.124"]
	I0314 19:40:58.060922    8428 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.17.93.236"]
	I0314 19:40:58.060922    8428 command_runner.go:130] >    extraArgs:
	I0314 19:40:58.060922    8428 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0314 19:40:58.060922    8428 command_runner.go:130] >  controllerManager:
	I0314 19:40:58.060922    8428 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.17.86.124
	+  advertiseAddress: 172.17.93.236
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-442000"
	   kubeletExtraArgs:
	-    node-ip: 172.17.86.124
	+    node-ip: 172.17.93.236
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.17.86.124"]
	+  certSANs: ["127.0.0.1", "localhost", "172.17.93.236"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
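The drift check above hinges on diff's exit status: sudo diff -u old new exits 0 when the files match and 1 when they differ, and only the latter triggers a reconfigure (here the advertise address, node-ip, and certSANs all moved from 172.17.86.124 to 172.17.93.236). A small sketch of that exit-code-driven decision, with hypothetical helper names:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u` and interprets the exit status:
// 0 means identical, 1 means the configs differ, anything else is an error.
func configDrifted(current, proposed string) (bool, error) {
	err := exec.Command("diff", "-u", current, proposed).Run()
	if err == nil {
		return false, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil
	}
	return false, err
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("kubeadm config drift detected; reconfiguring from the new file")
	}
}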
	I0314 19:40:58.060922    8428 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:40:58.067921    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 19:40:58.101075    8428 command_runner.go:130] > 8899bc003893
	I0314 19:40:58.101075    8428 command_runner.go:130] > 07c2872c48ed
	I0314 19:40:58.101075    8428 command_runner.go:130] > b179d157b6b2
	I0314 19:40:58.101075    8428 command_runner.go:130] > a3dba3fc54c0
	I0314 19:40:58.101075    8428 command_runner.go:130] > 1a321c0e8997
	I0314 19:40:58.101075    8428 command_runner.go:130] > 2a62baf3f1b4
	I0314 19:40:58.101075    8428 command_runner.go:130] > 9b3244b47278
	I0314 19:40:58.101075    8428 command_runner.go:130] > b046b896affe
	I0314 19:40:58.101075    8428 command_runner.go:130] > cd640f130e42
	I0314 19:40:58.101075    8428 command_runner.go:130] > dbb603289bf1
	I0314 19:40:58.101075    8428 command_runner.go:130] > 16b80f73683d
	I0314 19:40:58.101075    8428 command_runner.go:130] > 9585e3eb2ead
	I0314 19:40:58.101075    8428 command_runner.go:130] > 54e39762d7a6
	I0314 19:40:58.101075    8428 command_runner.go:130] > 102c907609a3
	I0314 19:40:58.101075    8428 command_runner.go:130] > ab390fc53b99
	I0314 19:40:58.101075    8428 command_runner.go:130] > af5b88117f99
	I0314 19:40:58.101075    8428 docker.go:483] Stopping containers: [8899bc003893 07c2872c48ed b179d157b6b2 a3dba3fc54c0 1a321c0e8997 2a62baf3f1b4 9b3244b47278 b046b896affe cd640f130e42 dbb603289bf1 16b80f73683d 9585e3eb2ead 54e39762d7a6 102c907609a3 ab390fc53b99 af5b88117f99]
	I0314 19:40:58.109662    8428 ssh_runner.go:195] Run: docker stop 8899bc003893 07c2872c48ed b179d157b6b2 a3dba3fc54c0 1a321c0e8997 2a62baf3f1b4 9b3244b47278 b046b896affe cd640f130e42 dbb603289bf1 16b80f73683d 9585e3eb2ead 54e39762d7a6 102c907609a3 ab390fc53b99 af5b88117f99
	I0314 19:40:58.134945    8428 command_runner.go:130] > 8899bc003893
	I0314 19:40:58.134945    8428 command_runner.go:130] > 07c2872c48ed
	I0314 19:40:58.134945    8428 command_runner.go:130] > b179d157b6b2
	I0314 19:40:58.134945    8428 command_runner.go:130] > a3dba3fc54c0
	I0314 19:40:58.134945    8428 command_runner.go:130] > 1a321c0e8997
	I0314 19:40:58.134945    8428 command_runner.go:130] > 2a62baf3f1b4
	I0314 19:40:58.134945    8428 command_runner.go:130] > 9b3244b47278
	I0314 19:40:58.134945    8428 command_runner.go:130] > b046b896affe
	I0314 19:40:58.134945    8428 command_runner.go:130] > cd640f130e42
	I0314 19:40:58.134945    8428 command_runner.go:130] > dbb603289bf1
	I0314 19:40:58.134945    8428 command_runner.go:130] > 16b80f73683d
	I0314 19:40:58.134945    8428 command_runner.go:130] > 9585e3eb2ead
	I0314 19:40:58.134945    8428 command_runner.go:130] > 54e39762d7a6
	I0314 19:40:58.134945    8428 command_runner.go:130] > 102c907609a3
	I0314 19:40:58.134945    8428 command_runner.go:130] > ab390fc53b99
	I0314 19:40:58.134945    8428 command_runner.go:130] > af5b88117f99
	I0314 19:40:58.145935    8428 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:40:58.181868    8428 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:40:58.199931    8428 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0314 19:40:58.199970    8428 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0314 19:40:58.199970    8428 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0314 19:40:58.199970    8428 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:40:58.199970    8428 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:40:58.199970    8428 kubeadm.go:156] found existing configuration files:
	
	I0314 19:40:58.208510    8428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:40:58.225973    8428 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:40:58.226140    8428 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:40:58.238965    8428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:40:58.266015    8428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:40:58.282779    8428 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:40:58.282884    8428 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:40:58.292147    8428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:40:58.317530    8428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:40:58.334084    8428 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:40:58.334204    8428 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:40:58.343828    8428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:40:58.372412    8428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:40:58.387831    8428 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:40:58.387831    8428 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:40:58.396514    8428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
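The four grep/rm pairs above apply one rule to admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf: any file that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed, so that kubeadm can regenerate it in the kubeconfig phase below. Sketched in Go (the file list and endpoint are taken from the log; the helper name is invented):

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// removeStaleKubeconfigs deletes any of the standard kubeconfig files that
// do not point at the expected control-plane endpoint; missing files are
// simply skipped, matching the grep-then-rm behaviour in the log.
func removeStaleKubeconfigs(dir string) error {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		if os.IsNotExist(err) {
			continue
		} else if err != nil {
			return err
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			if err := os.Remove(path); err != nil {
				return err
			}
			fmt.Println("removed stale", path)
		}
	}
	return nil
}

func main() {
	if err := removeStaleKubeconfigs("/etc/kubernetes"); err != nil {
		fmt.Println(err)
	}
}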
	I0314 19:40:58.421893    8428 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:40:58.437677    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:40:58.745595    8428 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:40:58.745691    8428 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0314 19:40:58.745691    8428 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0314 19:40:58.745691    8428 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:40:58.745785    8428 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0314 19:40:58.745785    8428 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:40:58.745825    8428 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0314 19:40:58.745825    8428 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0314 19:40:58.745857    8428 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:40:58.745902    8428 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:40:58.745936    8428 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:40:58.745980    8428 command_runner.go:130] > [certs] Using the existing "sa" key
	I0314 19:40:58.746082    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:40:59.622877    8428 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:40:59.622877    8428 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:40:59.622877    8428 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:40:59.622877    8428 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:40:59.622877    8428 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:40:59.622877    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:40:59.919191    8428 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:40:59.919229    8428 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:40:59.919229    8428 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0314 19:40:59.919229    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:41:00.010216    8428 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:41:00.010216    8428 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:41:00.010216    8428 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:41:00.010216    8428 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:41:00.010216    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:41:00.104060    8428 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:41:00.104060    8428 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:41:00.113047    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:41:00.616123    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:41:01.124257    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:41:01.628803    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:41:02.121788    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:41:02.142784    8428 command_runner.go:130] > 2008
	I0314 19:41:02.143188    8428 api_server.go:72] duration metric: took 2.0389736s to wait for apiserver process to appear ...
	I0314 19:41:02.143188    8428 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:41:02.143188    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:41:05.419799    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:41:05.419799    8428 api_server.go:103] status: https://172.17.93.236:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:41:05.419799    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:41:05.503543    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:41:05.503543    8428 api_server.go:103] status: https://172.17.93.236:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:41:05.654492    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:41:05.665202    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:41:05.666026    8428 api_server.go:103] status: https://172.17.93.236:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:41:06.157882    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:41:06.186077    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:41:06.186077    8428 api_server.go:103] status: https://172.17.93.236:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:41:06.652460    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:41:06.660908    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:41:06.660908    8428 api_server.go:103] status: https://172.17.93.236:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:41:07.144026    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:41:07.150727    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 200:
	ok
	I0314 19:41:07.151685    8428 round_trippers.go:463] GET https://172.17.93.236:8443/version
	I0314 19:41:07.151743    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:07.151761    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:07.151761    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:07.162083    8428 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 19:41:07.162898    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:07.162898    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:07.162959    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:07.162959    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:07.162959    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:07.162959    8428 round_trippers.go:580]     Content-Length: 264
	I0314 19:41:07.162959    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:07 GMT
	I0314 19:41:07.162959    8428 round_trippers.go:580]     Audit-Id: adc14fa1-3ec8-4ca8-bcbf-285a1d507ddf
	I0314 19:41:07.162959    8428 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0314 19:41:07.162959    8428 api_server.go:141] control plane version: v1.28.4
	I0314 19:41:07.162959    8428 api_server.go:131] duration metric: took 5.0193918s to wait for apiserver health ...
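	The block above is the apiserver health wait: minikube keeps polling https://172.17.93.236:8443/healthz, treating HTTP 500 responses whose bodies list "[-]poststarthook/... failed: reason withheld" as "still starting", and stops once the endpoint answers 200 with "ok". A minimal Go sketch of that kind of polling loop follows; the interval, timeout, and TLS handling are illustrative assumptions, not the values api_server.go actually uses.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver health endpoint until it answers 200 OK.
// A 500 whose body lists failed post-start hooks, as in the log above, just
// means startup has not finished yet, so we retry.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test cluster's serving cert is not trusted by the host, so the
		// probe skips verification (acceptable for a local health check only).
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://172.17.93.236:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}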
	I0314 19:41:07.162959    8428 cni.go:84] Creating CNI manager for ""
	I0314 19:41:07.162959    8428 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 19:41:07.167153    8428 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0314 19:41:07.180755    8428 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0314 19:41:07.189531    8428 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0314 19:41:07.189531    8428 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0314 19:41:07.189531    8428 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0314 19:41:07.189531    8428 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 19:41:07.189531    8428 command_runner.go:130] > Access: 2024-03-14 19:39:37.562004600 +0000
	I0314 19:41:07.189531    8428 command_runner.go:130] > Modify: 2024-03-13 22:53:41.000000000 +0000
	I0314 19:41:07.189531    8428 command_runner.go:130] > Change: 2024-03-14 19:39:30.743000000 +0000
	I0314 19:41:07.189531    8428 command_runner.go:130] >  Birth: -
	I0314 19:41:07.190135    8428 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0314 19:41:07.190135    8428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0314 19:41:07.262895    8428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0314 19:41:08.791840    8428 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0314 19:41:08.791879    8428 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0314 19:41:08.791879    8428 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0314 19:41:08.791879    8428 command_runner.go:130] > daemonset.apps/kindnet configured
	I0314 19:41:08.791934    8428 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5289228s)
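	For the CNI step, minikube has detected a three-node cluster, picked kindnet, copied the generated manifest into the guest at /var/tmp/minikube/cni.yaml, and applied it with the guest's own kubectl; the "unchanged"/"configured" output means the objects already existed from the first start. Outside of minikube's ssh_runner, the same apply step would look roughly like this sketch (paths taken from the log; it only makes sense to run inside the guest, where those files exist):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log runs over SSH; paths come from the log above.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}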
	I0314 19:41:08.791987    8428 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:41:08.792153    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods
	I0314 19:41:08.792153    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:08.792153    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:08.792153    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:08.797722    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:08.797722    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:08.797722    8428 round_trippers.go:580]     Audit-Id: 2b33a3ae-5d46-4e40-a15f-cfca67283dda
	I0314 19:41:08.797722    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:08.797722    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:08.797722    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:08.798730    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:08.798730    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:08.798730    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1729"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83773 chars]
	I0314 19:41:08.806668    8428 system_pods.go:59] 12 kube-system pods found
	I0314 19:41:08.806668    8428 system_pods.go:61] "coredns-5dd5756b68-d22jc" [2a563b3f-a175-4dc2-9f0b-67dbaefbfaac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:41:08.806668    8428 system_pods.go:61] "etcd-multinode-442000" [106cc31d-907f-4853-9e8d-f13c8ac4e398] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:41:08.806668    8428 system_pods.go:61] "kindnet-7b9lf" [677b9084-0026-4b21-b041-445940624ed7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0314 19:41:08.806668    8428 system_pods.go:61] "kindnet-c7m4p" [926a47cb-e444-455d-8b74-d17a229020a1] Running
	I0314 19:41:08.806668    8428 system_pods.go:61] "kindnet-r7zdb" [69b103aa-023b-4243-ba7b-875106aac183] Running
	I0314 19:41:08.806668    8428 system_pods.go:61] "kube-apiserver-multinode-442000" [ebdd5ddf-2b02-4315-bc64-1b10c383d507] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:41:08.806668    8428 system_pods.go:61] "kube-controller-manager-multinode-442000" [b16fc874-ef74-44ca-a54f-bb678bf982df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:41:08.806668    8428 system_pods.go:61] "kube-proxy-72dzs" [80b840b0-3803-4102-a966-ea73aed74f49] Running
	I0314 19:41:08.806668    8428 system_pods.go:61] "kube-proxy-cg28g" [c7f798bf-6722-4731-af8d-ccd5703d116e] Running
	I0314 19:41:08.806668    8428 system_pods.go:61] "kube-proxy-w2qls" [7a53e602-282e-4b63-a993-a5d23d3c615f] Running
	I0314 19:41:08.806668    8428 system_pods.go:61] "kube-scheduler-multinode-442000" [76b10598-fe0d-4a14-a8e4-a32221fbb68f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:41:08.806668    8428 system_pods.go:61] "storage-provisioner" [65d76566-4401-4b28-8452-10ed98624901] Running
	I0314 19:41:08.806668    8428 system_pods.go:74] duration metric: took 14.6396ms to wait for pod list to return data ...
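	The system_pods wait is a single PodList GET against /api/v1/namespaces/kube-system/pods followed by a readiness summary per pod, as listed above. A client-go sketch of the same check (the kubeconfig path is an assumption; inside the guest it is the file the log's kubectl uses):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path assumed; swap in your own when running elsewhere.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// One list call, then a per-pod summary, as in system_pods.go above.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
}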
	I0314 19:41:08.806668    8428 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:41:08.806668    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes
	I0314 19:41:08.806668    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:08.806668    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:08.806668    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:08.814106    8428 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:41:08.814106    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:08.814106    8428 round_trippers.go:580]     Audit-Id: e0708dc2-5f29-4486-b61c-97fc222cf858
	I0314 19:41:08.814106    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:08.814106    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:08.814106    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:08.814106    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:08.814106    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:08.814106    8428 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1729"},"items":[{"metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15627 chars]
	I0314 19:41:08.815709    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:41:08.815709    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:41:08.815709    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:41:08.815709    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:41:08.815709    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:41:08.815709    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:41:08.815709    8428 node_conditions.go:105] duration metric: took 9.0408ms to run NodePressure ...
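	The NodePressure pass lists the nodes once and reads each node's capacity plus its pressure conditions, which is why the same two capacity lines repeat three times, once per node. Building on the clientset setup from the previous sketch (same imports plus corev1 "k8s.io/api/core/v1"), the equivalent check looks roughly like this:

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeConditions mirrors the NodePressure pass above: report each node's
// cpu and ephemeral-storage capacity and any pressure conditions.
func printNodeConditions(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
	return nil
}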
	I0314 19:41:08.815709    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:41:09.171059    8428 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0314 19:41:09.171059    8428 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0314 19:41:09.171164    8428 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:41:09.171320    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0314 19:41:09.171391    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.171391    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.171391    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.175576    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:09.176542    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.176542    8428 round_trippers.go:580]     Audit-Id: 210afc00-498f-40e3-9c5a-8e3b45f11632
	I0314 19:41:09.176581    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.176581    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.176581    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.176581    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.176581    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.176706    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1732"},"items":[{"metadata":{"name":"etcd-multinode-442000","namespace":"kube-system","uid":"106cc31d-907f-4853-9e8d-f13c8ac4e398","resourceVersion":"1726","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.93.236:2379","kubernetes.io/config.hash":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.mirror":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.seen":"2024-03-14T19:41:00.367789550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29350 chars]
	I0314 19:41:09.178835    8428 kubeadm.go:733] kubelet initialised
	I0314 19:41:09.178868    8428 kubeadm.go:734] duration metric: took 7.7038ms waiting for restarted kubelet to initialise ...
	I0314 19:41:09.178910    8428 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:41:09.179016    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods
	I0314 19:41:09.179055    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.179055    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.179055    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.188274    8428 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0314 19:41:09.188335    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.188335    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.188335    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.188383    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.188383    8428 round_trippers.go:580]     Audit-Id: 5862cf79-e6ad-440a-b0d3-98c024526415
	I0314 19:41:09.188383    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.188383    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.189783    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1732"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83581 chars]
	I0314 19:41:09.193297    8428 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:09.193297    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:09.193297    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.193297    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.193297    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.196852    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:09.197333    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.197333    8428 round_trippers.go:580]     Audit-Id: 5140e513-374a-4f0c-84d5-c8083d5e75db
	I0314 19:41:09.197333    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.197333    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.197333    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.197408    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.197408    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.197537    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:09.198224    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:09.198295    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.198295    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.198295    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.200977    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:09.200977    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.201841    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.201841    8428 round_trippers.go:580]     Audit-Id: 55b96cd4-989f-4a8a-85a9-359add4fb771
	I0314 19:41:09.201841    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.201841    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.201876    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.201876    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.201995    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:09.201995    8428 pod_ready.go:97] node "multinode-442000" hosting pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.202527    8428 pod_ready.go:81] duration metric: took 9.2294ms for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:09.202568    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000" hosting pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
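	The pattern that repeats for each control-plane pod from here on is a two-step gate: fetch the pod, fetch the node it runs on, and skip the wait when the node itself is not Ready, which is the case for multinode-442000 right after the restart. A sketch of that gate, reusing the client-go setup from the sketches above:

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether a pod counts as "Ready": its hosting node must
// report Ready=True and the pod's own PodReady condition must be True.
// When the node is not Ready the wait is skipped, as the log lines show.
func podReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			return false, nil // node not Ready (False or Unknown): skip
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}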
	I0314 19:41:09.202584    8428 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:09.202696    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-442000
	I0314 19:41:09.202735    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.202735    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.202735    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.205818    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:09.205818    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.205818    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.205818    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.205818    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.205818    8428 round_trippers.go:580]     Audit-Id: 94feb1b6-bc4f-4304-8f06-b404ed63c50a
	I0314 19:41:09.205818    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.205818    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.205818    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-442000","namespace":"kube-system","uid":"106cc31d-907f-4853-9e8d-f13c8ac4e398","resourceVersion":"1726","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.93.236:2379","kubernetes.io/config.hash":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.mirror":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.seen":"2024-03-14T19:41:00.367789550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0314 19:41:09.205818    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:09.205818    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.205818    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.205818    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.208955    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:09.208955    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.209921    8428 round_trippers.go:580]     Audit-Id: 093453b0-7d6d-43e9-9174-a6701217f77c
	I0314 19:41:09.209921    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.209921    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.209921    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.209921    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.209921    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.209921    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:09.210500    8428 pod_ready.go:97] node "multinode-442000" hosting pod "etcd-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.210500    8428 pod_ready.go:81] duration metric: took 7.9156ms for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:09.210500    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000" hosting pod "etcd-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.210587    8428 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:09.210697    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-442000
	I0314 19:41:09.210719    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.210719    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.210719    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.213277    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:09.213911    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.213911    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.213911    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.213911    8428 round_trippers.go:580]     Audit-Id: 7ff85a82-040a-458b-8860-4f2f62773e57
	I0314 19:41:09.213911    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.213911    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.213911    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.214263    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-442000","namespace":"kube-system","uid":"ebdd5ddf-2b02-4315-bc64-1b10c383d507","resourceVersion":"1719","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.93.236:8443","kubernetes.io/config.hash":"7754d2f32966faec8123dc3b8a2af767","kubernetes.io/config.mirror":"7754d2f32966faec8123dc3b8a2af767","kubernetes.io/config.seen":"2024-03-14T19:41:00.350706636Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7644 chars]
	I0314 19:41:09.214794    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:09.214794    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.214794    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.214794    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.218902    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:09.218902    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.218902    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.218902    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.219026    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.219026    8428 round_trippers.go:580]     Audit-Id: 115a34dd-3caa-4ad3-adeb-a34843207664
	I0314 19:41:09.219026    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.219026    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.219193    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:09.219582    8428 pod_ready.go:97] node "multinode-442000" hosting pod "kube-apiserver-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.219631    8428 pod_ready.go:81] duration metric: took 9.0435ms for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:09.219631    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000" hosting pod "kube-apiserver-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.219631    8428 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:09.219736    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-442000
	I0314 19:41:09.219736    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.219736    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.219800    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.222977    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:09.222977    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.222977    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.223309    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.223309    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.223309    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.223309    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.223309    8428 round_trippers.go:580]     Audit-Id: a2ab6a43-1b37-46df-bacf-ec964ada0191
	I0314 19:41:09.223579    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-442000","namespace":"kube-system","uid":"b16fc874-ef74-44ca-a54f-bb678bf982df","resourceVersion":"1717","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.mirror":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.seen":"2024-03-14T19:18:55.420205308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I0314 19:41:09.224149    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:09.224149    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.224198    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.224198    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.226944    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:09.226944    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.227244    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.227244    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.227244    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.227244    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.227244    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.227244    8428 round_trippers.go:580]     Audit-Id: 2a7c63aa-1465-4e4c-9f5f-c53b397ad2e1
	I0314 19:41:09.227354    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:09.227777    8428 pod_ready.go:97] node "multinode-442000" hosting pod "kube-controller-manager-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.227853    8428 pod_ready.go:81] duration metric: took 8.2206ms for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:09.227853    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000" hosting pod "kube-controller-manager-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.227853    8428 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:09.393363    8428 request.go:629] Waited for 165.0379ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:41:09.393363    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:41:09.393363    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.393363    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.393363    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.397987    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:09.397987    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.397987    8428 round_trippers.go:580]     Audit-Id: eb7d23d8-e7cd-4193-b454-7524dddfc577
	I0314 19:41:09.397987    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.398185    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.398259    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.398259    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.398259    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.399033    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-72dzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"80b840b0-3803-4102-a966-ea73aed74f49","resourceVersion":"621","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0314 19:41:09.596494    8428 request.go:629] Waited for 197.4463ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:41:09.596805    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:41:09.596805    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.596932    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.596932    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.599863    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:09.599863    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.599863    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.599863    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.599863    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.599863    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.599863    8428 round_trippers.go:580]     Audit-Id: 791da0aa-59cc-4e18-8f9c-c00c881216bf
	I0314 19:41:09.599863    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.600918    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"1346","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0314 19:41:09.600918    8428 pod_ready.go:92] pod "kube-proxy-72dzs" in "kube-system" namespace has status "Ready":"True"
	I0314 19:41:09.600918    8428 pod_ready.go:81] duration metric: took 373.0376ms for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
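	The "Waited for ... due to client-side throttling" lines above and below are emitted by client-go itself: its default token-bucket rate limiter (5 requests/s with a burst of 10) kicks in once these back-to-back GETs exhaust the burst, so the ~200ms waits are expected behaviour here, not a fault. If a client needed to avoid them, the limits can be raised on the rest.Config before building the clientset; the 50/100 values in this sketch are illustrative, not what minikube configures:

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newFasterClient raises client-go's default rate limits (QPS 5, Burst 10),
// the source of the "client-side throttling" waits in the log above.
func newFasterClient(config *rest.Config) (*kubernetes.Clientset, error) {
	config.QPS = 50    // sustained requests per second (illustrative)
	config.Burst = 100 // short-term burst allowance (illustrative)
	return kubernetes.NewForConfig(config)
}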
	I0314 19:41:09.600918    8428 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:09.801186    8428 request.go:629] Waited for 199.7314ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:41:09.801186    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:41:09.801186    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.801186    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.801186    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.805078    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:09.805446    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.805446    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.805446    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:10 GMT
	I0314 19:41:09.805446    8428 round_trippers.go:580]     Audit-Id: 57df20d1-b284-4a39-97c6-a9be036bb196
	I0314 19:41:09.805446    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.805446    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.805446    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.805712    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cg28g","generateName":"kube-proxy-","namespace":"kube-system","uid":"c7f798bf-6722-4731-af8d-ccd5703d116e","resourceVersion":"1728","creationTimestamp":"2024-03-14T19:19:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0314 19:41:10.006267    8428 request.go:629] Waited for 199.5775ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:10.006267    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:10.006267    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:10.006267    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:10.006267    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:10.009844    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:10.010040    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:10.010040    8428 round_trippers.go:580]     Audit-Id: b32ba143-e2a3-4590-b5eb-17d46831f335
	I0314 19:41:10.010040    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:10.010040    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:10.010040    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:10.010040    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:10.010040    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:10 GMT
	I0314 19:41:10.010305    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:10.010451    8428 pod_ready.go:97] node "multinode-442000" hosting pod "kube-proxy-cg28g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:10.010451    8428 pod_ready.go:81] duration metric: took 409.5011ms for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:10.010451    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000" hosting pod "kube-proxy-cg28g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:10.010451    8428 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w2qls" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:10.193151    8428 request.go:629] Waited for 182.6868ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2qls
	I0314 19:41:10.193353    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2qls
	I0314 19:41:10.193353    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:10.193738    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:10.193777    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:10.197761    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:10.197819    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:10.197819    8428 round_trippers.go:580]     Audit-Id: 02391dd9-57bc-4e58-8d28-4228817b2666
	I0314 19:41:10.197819    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:10.197819    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:10.197819    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:10.197819    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:10.197872    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:10 GMT
	I0314 19:41:10.197872    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w2qls","generateName":"kube-proxy-","namespace":"kube-system","uid":"7a53e602-282e-4b63-a993-a5d23d3c615f","resourceVersion":"1678","creationTimestamp":"2024-03-14T19:26:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:26:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0314 19:41:10.398299    8428 request.go:629] Waited for 199.7405ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m03
	I0314 19:41:10.398801    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m03
	I0314 19:41:10.398801    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:10.398801    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:10.398801    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:10.402482    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:10.402482    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:10.403100    8428 round_trippers.go:580]     Audit-Id: daf7de4c-5774-4946-8e30-78a41a1a1ff5
	I0314 19:41:10.403100    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:10.403100    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:10.403100    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:10.403100    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:10.403100    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:10 GMT
	I0314 19:41:10.403480    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m03","uid":"1b8e342b-6e96-49e8-a22c-874445d29fe3","resourceVersion":"1688","creationTimestamp":"2024-03-14T19:36:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_36_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:36:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0314 19:41:10.404010    8428 pod_ready.go:97] node "multinode-442000-m03" hosting pod "kube-proxy-w2qls" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m03" has status "Ready":"Unknown"
	I0314 19:41:10.404075    8428 pod_ready.go:81] duration metric: took 393.5951ms for pod "kube-proxy-w2qls" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:10.404075    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000-m03" hosting pod "kube-proxy-w2qls" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m03" has status "Ready":"Unknown"
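Each WaitExtra skip above follows the same rule: a pod is not waited on for "Ready" while its hosting node reports a NodeReady condition of "False" (multinode-442000) or "Unknown" (multinode-442000-m03, whose kubelet has stopped posting status). A minimal sketch of that condition lookup, as an illustration of the check rather than minikube's pod_ready.go itself:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReady returns the status of the NodeReady condition: "True", "False",
// or "Unknown" (the kubelet stopped reporting) when no condition is found.
func nodeReady(node *corev1.Node) corev1.ConditionStatus {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status
		}
	}
	return corev1.ConditionUnknown
}

func main() {
	n := &corev1.Node{}
	n.Status.Conditions = []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
	}
	fmt.Println(nodeReady(n)) // "False", as logged for multinode-442000
}
```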
	I0314 19:41:10.404075    8428 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:10.601485    8428 request.go:629] Waited for 197.3948ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:41:10.601708    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:41:10.601708    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:10.601708    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:10.601708    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:10.606546    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:10.606546    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:10.606546    8428 round_trippers.go:580]     Audit-Id: 94bf8275-796e-459b-8502-5cfeed46fae1
	I0314 19:41:10.606546    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:10.606546    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:10.606546    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:10.606546    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:10.607571    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:10 GMT
	I0314 19:41:10.608010    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-442000","namespace":"kube-system","uid":"76b10598-fe0d-4a14-a8e4-a32221fbb68f","resourceVersion":"1716","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.mirror":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.seen":"2024-03-14T19:18:55.420206709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0314 19:41:10.804758    8428 request.go:629] Waited for 195.6455ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:10.804921    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:10.804984    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:10.805035    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:10.805035    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:10.809625    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:10.809625    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:10.809625    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:10.809625    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:10.809625    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:10.809625    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:11 GMT
	I0314 19:41:10.809625    8428 round_trippers.go:580]     Audit-Id: be49477e-f53e-4f00-9413-f03c1ac9aa0d
	I0314 19:41:10.809625    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:10.810319    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:10.810319    8428 pod_ready.go:97] node "multinode-442000" hosting pod "kube-scheduler-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:10.810853    8428 pod_ready.go:81] duration metric: took 406.7464ms for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:10.810853    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000" hosting pod "kube-scheduler-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:10.810940    8428 pod_ready.go:38] duration metric: took 1.6319064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:41:10.810940    8428 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:41:10.830620    8428 command_runner.go:130] > -16
	I0314 19:41:10.830765    8428 ops.go:34] apiserver oom_adj: -16
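The command above reads the legacy procfs OOM knob for the apiserver process; -16 means the kernel is strongly discouraged from OOM-killing it (oom_adj is the deprecated predecessor of oom_score_adj, and writes to one are mirrored in the other). A rough Go equivalent of that check, assuming a single kube-apiserver process (pgrep can return several PIDs; this sketch just takes the first):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep prints one PID per line; take the first (assumes one apiserver).
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "pgrep:", err)
		return
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		fmt.Fprintln(os.Stderr, "no kube-apiserver process found")
		return
	}
	// oom_adj is the legacy knob; the kernel keeps it in sync with oom_score_adj.
	adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read oom_adj:", err)
		return
	}
	fmt.Println(strings.TrimSpace(string(adj))) // e.g. -16, as logged above
}
```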
	I0314 19:41:10.830765    8428 kubeadm.go:591] duration metric: took 12.8374176s to restartPrimaryControlPlane
	I0314 19:41:10.830765    8428 kubeadm.go:393] duration metric: took 12.8962854s to StartCluster
	I0314 19:41:10.830818    8428 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:41:10.830884    8428 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:41:10.832480    8428 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:41:10.833753    8428 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.93.236 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 19:41:10.833753    8428 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:41:10.836872    8428 out.go:177] * Verifying Kubernetes components...
	I0314 19:41:10.834364    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:41:10.839781    8428 out.go:177] * Enabled addons: 
	I0314 19:41:10.843389    8428 addons.go:505] duration metric: took 9.6864ms for enable addons: enabled=[]
	I0314 19:41:10.854601    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:41:11.154424    8428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:41:11.194059    8428 node_ready.go:35] waiting up to 6m0s for node "multinode-442000" to be "Ready" ...
	I0314 19:41:11.194232    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:11.194232    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:11.194232    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:11.194232    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:11.196374    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:11.196374    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:11.196374    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:11.197177    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:11.197177    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:11.197177    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:11 GMT
	I0314 19:41:11.197177    8428 round_trippers.go:580]     Audit-Id: fbf20db4-2496-4d1f-a43f-a2ff2f9ea23b
	I0314 19:41:11.197177    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:11.197841    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:11.701344    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:11.701344    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:11.701436    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:11.701436    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:11.706144    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:11.706144    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:11.706144    8428 round_trippers.go:580]     Audit-Id: 8b6754d9-6d6d-4ba4-ae2d-b6e56683db54
	I0314 19:41:11.706144    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:11.706144    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:11.706144    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:11.706144    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:11.706144    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:11 GMT
	I0314 19:41:11.706572    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:12.200807    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:12.200885    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:12.200885    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:12.200885    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:12.204116    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:12.205168    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:12.205168    8428 round_trippers.go:580]     Audit-Id: bf1982f4-7453-45f2-a6e1-10adc79e2f21
	I0314 19:41:12.205168    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:12.205168    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:12.205168    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:12.205168    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:12.205168    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:12 GMT
	I0314 19:41:12.205470    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:12.703948    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:12.703948    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:12.703948    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:12.703948    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:12.708354    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:12.708354    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:12.708354    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:12.708354    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:12.708354    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:12.708354    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:12.708354    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:12 GMT
	I0314 19:41:12.708354    8428 round_trippers.go:580]     Audit-Id: 7f301715-1049-4f73-a7c0-a33d0761e77c
	I0314 19:41:12.708354    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:13.204021    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:13.204021    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:13.204021    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:13.204021    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:13.208948    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:13.208948    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:13.208948    8428 round_trippers.go:580]     Audit-Id: e73fe370-3658-4436-a73f-36b8bbbafdba
	I0314 19:41:13.209070    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:13.209070    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:13.209070    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:13.209070    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:13.209070    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:13 GMT
	I0314 19:41:13.209270    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:13.210162    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:13.706141    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:13.706141    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:13.706211    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:13.706211    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:13.709768    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:13.709768    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:13.709768    8428 round_trippers.go:580]     Audit-Id: d58f281a-b114-4d38-b710-a7d7929aceb7
	I0314 19:41:13.709768    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:13.709768    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:13.709768    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:13.709768    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:13.709768    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:13 GMT
	I0314 19:41:13.710632    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:14.207204    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:14.207274    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:14.207274    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:14.207274    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:14.211425    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:14.211521    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:14.211521    8428 round_trippers.go:580]     Audit-Id: 2e34aa9a-d1e2-48cf-8bc5-b1d3bbfe6e0a
	I0314 19:41:14.211521    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:14.211521    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:14.211521    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:14.211521    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:14.211521    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:14 GMT
	I0314 19:41:14.212063    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:14.708444    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:14.708692    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:14.708692    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:14.708692    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:14.713259    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:14.713338    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:14.713405    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:14.713405    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:14.713405    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:14.713405    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:14 GMT
	I0314 19:41:14.713405    8428 round_trippers.go:580]     Audit-Id: 37c5de44-3c87-407b-9fa1-9bfad7343a75
	I0314 19:41:14.713405    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:14.713405    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:15.196361    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:15.196361    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:15.196361    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:15.196361    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:15.200705    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:15.200705    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:15.200705    8428 round_trippers.go:580]     Audit-Id: 30f34791-a826-4657-9149-524b87a5b814
	I0314 19:41:15.200705    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:15.200705    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:15.200705    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:15.200705    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:15.200705    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:15 GMT
	I0314 19:41:15.200705    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:15.696025    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:15.696107    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:15.696107    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:15.696107    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:15.703736    8428 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:41:15.703736    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:15.703736    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:15 GMT
	I0314 19:41:15.703736    8428 round_trippers.go:580]     Audit-Id: ea741c36-6145-4cbb-a156-62765d4c3552
	I0314 19:41:15.703736    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:15.703736    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:15.703736    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:15.703736    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:15.704211    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:15.704409    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:16.196894    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:16.196894    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:16.196968    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:16.196968    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:16.201032    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:16.201032    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:16.201032    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:16 GMT
	I0314 19:41:16.201032    8428 round_trippers.go:580]     Audit-Id: 067b1039-0f92-4a7b-932f-2d641038029e
	I0314 19:41:16.201032    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:16.201032    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:16.201032    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:16.201032    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:16.201032    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:16.696270    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:16.696360    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:16.696434    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:16.696434    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:16.703698    8428 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:41:16.703698    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:16.703698    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:16.703698    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:16.703698    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:16 GMT
	I0314 19:41:16.703698    8428 round_trippers.go:580]     Audit-Id: 0c75326e-7ea4-4e83-8aea-0eb90c485978
	I0314 19:41:16.703698    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:16.703698    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:16.704230    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:17.196761    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:17.196830    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:17.196830    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:17.196830    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:17.200827    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:17.201243    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:17.201243    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:17.201243    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:17.201243    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:17 GMT
	I0314 19:41:17.201243    8428 round_trippers.go:580]     Audit-Id: a218d508-4b85-472d-b5f0-04fea64336c2
	I0314 19:41:17.201243    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:17.201243    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:17.201545    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:17.697485    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:17.697810    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:17.697810    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:17.697810    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:17.703943    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:41:17.703943    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:17.703943    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:17.703943    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:17.703943    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:17.703943    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:17.703943    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:17 GMT
	I0314 19:41:17.703943    8428 round_trippers.go:580]     Audit-Id: 13017c24-1c4c-49d1-9277-195a59a7263a
	I0314 19:41:17.704545    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:17.705229    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:18.198356    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:18.198436    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:18.198436    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:18.198436    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:18.202797    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:18.203639    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:18.203639    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:18.203639    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:18 GMT
	I0314 19:41:18.203639    8428 round_trippers.go:580]     Audit-Id: 8298cabd-c94d-43e2-89b4-2651e7265b30
	I0314 19:41:18.203639    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:18.203639    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:18.203639    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:18.204033    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:18.704147    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:18.704147    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:18.704220    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:18.704220    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:18.707736    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:18.707736    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:18.707736    8428 round_trippers.go:580]     Audit-Id: 6c81b5fa-b9e1-453e-bcf6-cddc21e9be5b
	I0314 19:41:18.707736    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:18.707736    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:18.707736    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:18.707736    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:18.707736    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:18 GMT
	I0314 19:41:18.708334    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:19.209680    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:19.209680    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:19.209680    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:19.209680    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:19.213464    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:19.213464    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:19.213464    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:19.213464    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:19.213464    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:19.213464    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:19 GMT
	I0314 19:41:19.213464    8428 round_trippers.go:580]     Audit-Id: bc7eb2b2-49cf-4b50-9552-8a0f91590f1b
	I0314 19:41:19.213464    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:19.213464    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:19.695554    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:19.695554    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:19.695554    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:19.695723    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:19.698463    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:19.698463    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:19.699421    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:19 GMT
	I0314 19:41:19.699421    8428 round_trippers.go:580]     Audit-Id: d4ae5292-034d-4f0b-b79b-25f6792d7cfb
	I0314 19:41:19.699421    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:19.699421    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:19.699421    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:19.699421    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:19.699572    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:20.198853    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:20.198853    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:20.198853    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:20.198853    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:20.202694    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:20.202694    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:20.202694    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:20.202694    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:20 GMT
	I0314 19:41:20.202694    8428 round_trippers.go:580]     Audit-Id: e21ed7cb-6d6e-4e88-b60d-a87969b0179f
	I0314 19:41:20.202694    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:20.202694    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:20.202694    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:20.202694    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:20.203410    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:20.702467    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:20.702467    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:20.702467    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:20.702467    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:20.706042    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:20.706042    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:20.706042    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:20.706042    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:20.706042    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:20.706042    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:20 GMT
	I0314 19:41:20.706042    8428 round_trippers.go:580]     Audit-Id: 0a229a6b-a0b1-4341-95f8-5bca0160e1a5
	I0314 19:41:20.706042    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:20.707040    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:21.205117    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:21.205506    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:21.205538    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:21.205538    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:21.214694    8428 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0314 19:41:21.214694    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:21.214694    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:21.214694    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:21.214694    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:21.214694    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:21 GMT
	I0314 19:41:21.214694    8428 round_trippers.go:580]     Audit-Id: 821abdd6-7c52-4ed1-8f58-e9bee1b83e46
	I0314 19:41:21.214694    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:21.215229    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:21.704306    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:21.704380    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:21.704380    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:21.704380    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:21.707571    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:21.708471    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:21.708471    8428 round_trippers.go:580]     Audit-Id: 3be7aecf-35c9-447c-97f4-c81e0d047d94
	I0314 19:41:21.708471    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:21.708471    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:21.708471    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:21.708596    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:21.708596    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:21 GMT
	I0314 19:41:21.708881    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:22.205630    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:22.205630    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:22.205630    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:22.205630    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:22.210204    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:22.210763    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:22.210763    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:22.210763    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:22.210763    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:22.210763    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:22 GMT
	I0314 19:41:22.210763    8428 round_trippers.go:580]     Audit-Id: 56e4f5e8-4bfd-485a-ab28-c96aa0a28bc9
	I0314 19:41:22.210894    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:22.211210    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:22.211926    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:22.704264    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:22.704264    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:22.704365    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:22.704365    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:22.710046    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:22.710046    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:22.710046    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:22 GMT
	I0314 19:41:22.710046    8428 round_trippers.go:580]     Audit-Id: 66f64f63-03e2-4fe1-a022-264696375071
	I0314 19:41:22.710046    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:22.710046    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:22.710046    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:22.711058    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:22.711220    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:23.204508    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:23.204508    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:23.204508    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:23.204567    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:23.208039    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:23.208039    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:23.208039    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:23 GMT
	I0314 19:41:23.208039    8428 round_trippers.go:580]     Audit-Id: b9c5ac56-9f98-4a63-9652-0c24db61ba8a
	I0314 19:41:23.208039    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:23.208039    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:23.208039    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:23.208039    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:23.208795    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:23.702700    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:23.702700    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:23.702700    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:23.702700    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:23.705762    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:23.706323    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:23.706323    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:23.706323    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:23.706323    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:23 GMT
	I0314 19:41:23.706323    8428 round_trippers.go:580]     Audit-Id: cab92b8e-c7b7-400d-8246-deac63b3eb4d
	I0314 19:41:23.706323    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:23.706323    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:23.706559    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:24.203705    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:24.203705    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:24.203705    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:24.203705    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:24.208317    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:24.208317    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:24.208317    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:24 GMT
	I0314 19:41:24.208317    8428 round_trippers.go:580]     Audit-Id: f346c234-b78b-444e-a2f4-80249f4bad42
	I0314 19:41:24.208317    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:24.208317    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:24.208317    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:24.208317    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:24.208317    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:24.706100    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:24.706156    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:24.706156    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:24.706156    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:24.709644    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:24.710403    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:24.710403    8428 round_trippers.go:580]     Audit-Id: 8e67a93a-d8e5-41c3-a1f1-34af91412eef
	I0314 19:41:24.710403    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:24.710403    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:24.710403    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:24.710403    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:24.710403    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:24 GMT
	I0314 19:41:24.710538    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:24.711437    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:25.206876    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:25.207163    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:25.207163    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:25.207163    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:25.211348    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:25.211859    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:25.211859    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:25.211859    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:25.211859    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:25 GMT
	I0314 19:41:25.211859    8428 round_trippers.go:580]     Audit-Id: 51ddccec-cefc-48db-885c-0bee4de68761
	I0314 19:41:25.211859    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:25.211859    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:25.212094    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:25.710310    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:25.710310    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:25.710310    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:25.710310    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:25.714070    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:25.714070    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:25.715021    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:25.715261    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:25.715317    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:25 GMT
	I0314 19:41:25.715317    8428 round_trippers.go:580]     Audit-Id: 9bba2453-fe1b-4c00-aa33-27178358573e
	I0314 19:41:25.715317    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:25.715317    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:25.715317    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:26.196781    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:26.196781    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:26.196781    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:26.196781    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:26.200374    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:26.200374    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:26.200374    8428 round_trippers.go:580]     Audit-Id: 38a0865d-e9fd-4357-ae22-310a2ca8054e
	I0314 19:41:26.200374    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:26.200374    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:26.200374    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:26.200374    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:26.200374    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:26 GMT
	I0314 19:41:26.201377    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:26.711357    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:26.711437    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:26.711437    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:26.711437    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:26.715606    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:26.715606    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:26.715606    8428 round_trippers.go:580]     Audit-Id: 83f705e2-d6ce-4277-a141-bbf7fb20cb36
	I0314 19:41:26.715606    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:26.715606    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:26.715606    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:26.715606    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:26.715606    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:26 GMT
	I0314 19:41:26.715606    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:26.716145    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:27.197788    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:27.197788    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:27.197788    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:27.197788    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:27.202284    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:27.202284    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:27.202284    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:27.202284    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:27.202284    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:27.202284    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:27.202284    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:27 GMT
	I0314 19:41:27.202284    8428 round_trippers.go:580]     Audit-Id: 89780342-689a-4b4b-9a53-507e4becaf42
	I0314 19:41:27.202284    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:27.700410    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:27.700410    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:27.700410    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:27.700410    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:27.703997    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:27.704520    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:27.704520    8428 round_trippers.go:580]     Audit-Id: 9d64fa9e-f1ad-4b25-a409-a011218db958
	I0314 19:41:27.704520    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:27.704520    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:27.704520    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:27.704520    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:27.704520    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:27 GMT
	I0314 19:41:27.705028    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:28.197800    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:28.197800    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:28.197800    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:28.197884    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:28.202176    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:28.202176    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:28.202176    8428 round_trippers.go:580]     Audit-Id: 1b5f2e31-1bec-41fd-ace2-56b5a084375e
	I0314 19:41:28.202176    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:28.202176    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:28.202176    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:28.202176    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:28.202176    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:28 GMT
	I0314 19:41:28.202176    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:28.696108    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:28.696184    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:28.696184    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:28.696240    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:28.702088    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:28.702088    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:28.702088    8428 round_trippers.go:580]     Audit-Id: 0b41cd5b-99cc-4e50-bc6a-4bf59ddc2c08
	I0314 19:41:28.702088    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:28.702088    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:28.702088    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:28.702088    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:28.702088    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:28 GMT
	I0314 19:41:28.702688    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:29.202761    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:29.202761    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:29.202761    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:29.202838    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:29.207149    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:29.207194    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:29.207194    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:29.207194    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:29.207194    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:29.207194    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:29 GMT
	I0314 19:41:29.207194    8428 round_trippers.go:580]     Audit-Id: fc335680-e751-44a1-b304-a9d9ba9270e4
	I0314 19:41:29.207254    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:29.207577    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:29.208204    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:29.706721    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:29.706721    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:29.706721    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:29.706721    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:29.710405    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:29.710405    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:29.710405    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:29 GMT
	I0314 19:41:29.710405    8428 round_trippers.go:580]     Audit-Id: c1c053bb-259b-400f-b791-63de33b648b9
	I0314 19:41:29.710405    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:29.710405    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:29.710405    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:29.710405    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:29.711309    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:30.205481    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:30.205591    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:30.205591    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:30.205591    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:30.209317    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:30.209317    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:30.209317    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:30.209502    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:30.209502    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:30 GMT
	I0314 19:41:30.209502    8428 round_trippers.go:580]     Audit-Id: 4525adea-2f03-4922-ba43-c78e4863bb3b
	I0314 19:41:30.209502    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:30.209502    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:30.209694    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:30.710690    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:30.710690    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:30.710690    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:30.710690    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:30.715042    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:30.715289    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:30.715289    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:30.715289    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:30.715289    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:30.715289    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:30.715289    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:30 GMT
	I0314 19:41:30.715289    8428 round_trippers.go:580]     Audit-Id: 8b5ec208-d0d4-4558-81ba-0373ff9b6752
	I0314 19:41:30.715496    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:31.209024    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:31.209059    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:31.209113    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:31.209145    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:31.212799    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:31.213090    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:31.213090    8428 round_trippers.go:580]     Audit-Id: 3c362792-722e-431f-b1ff-782f8acf2474
	I0314 19:41:31.213146    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:31.213146    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:31.213146    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:31.213146    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:31.213146    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:31 GMT
	I0314 19:41:31.213393    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:31.213860    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:31.710401    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:31.710452    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:31.710522    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:31.710522    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:31.713831    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:31.713831    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:31.713831    8428 round_trippers.go:580]     Audit-Id: 212a743c-6403-4f5b-88ce-91fc302ad0ae
	I0314 19:41:31.713831    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:31.713831    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:31.713831    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:31.713831    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:31.713831    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:31 GMT
	I0314 19:41:31.713831    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:32.207804    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:32.207804    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:32.207804    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:32.207804    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:32.211862    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:32.212050    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:32.212050    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:32.212050    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:32.212050    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:32 GMT
	I0314 19:41:32.212050    8428 round_trippers.go:580]     Audit-Id: b6f497b2-4c77-4c58-b8d8-746b5b80300a
	I0314 19:41:32.212050    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:32.212050    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:32.212358    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:32.707961    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:32.707961    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:32.707961    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:32.707961    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:32.712077    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:32.712077    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:32.712077    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:32.712142    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:32.712142    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:32 GMT
	I0314 19:41:32.712142    8428 round_trippers.go:580]     Audit-Id: 53ee98bb-9195-4391-92a1-ba1146bb275f
	I0314 19:41:32.712142    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:32.712142    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:32.712142    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:33.207974    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:33.207974    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:33.207974    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:33.207974    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:33.211561    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:33.212170    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:33.212227    8428 round_trippers.go:580]     Audit-Id: 3089fa3e-969f-4d05-aae5-937d050da974
	I0314 19:41:33.212227    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:33.212227    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:33.212227    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:33.212227    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:33.212227    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:33 GMT
	I0314 19:41:33.212227    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:33.710070    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:33.710294    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:33.710294    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:33.710294    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:33.716000    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:33.716000    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:33.716000    8428 round_trippers.go:580]     Audit-Id: 23fe6eb1-f627-4944-b86e-691cc4dc3568
	I0314 19:41:33.716000    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:33.716000    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:33.716000    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:33.716000    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:33.716000    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:33 GMT
	I0314 19:41:33.716601    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:33.716632    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:34.210253    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:34.210348    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:34.210348    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:34.210348    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:34.215843    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:34.216538    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:34.216538    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:34.216538    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:34.216538    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:34 GMT
	I0314 19:41:34.216538    8428 round_trippers.go:580]     Audit-Id: a10e4223-f4bb-4f14-b72c-4bd44fd67b81
	I0314 19:41:34.216538    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:34.216538    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:34.216990    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:34.709540    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:34.709540    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:34.709540    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:34.709540    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:34.713643    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:34.713643    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:34.713643    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:34.713643    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:34.713643    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:34.713643    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:34.713643    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:34 GMT
	I0314 19:41:34.713643    8428 round_trippers.go:580]     Audit-Id: 9dcfb0cf-c239-4d37-a4bc-190169535f2b
	I0314 19:41:34.713643    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:35.208688    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:35.208688    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:35.208688    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:35.208688    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:35.213436    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:35.213436    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:35.213436    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:35.213436    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:35.213436    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:35.213530    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:35 GMT
	I0314 19:41:35.213530    8428 round_trippers.go:580]     Audit-Id: 80e2a421-0cbe-40e9-b801-bd9f038f6e2d
	I0314 19:41:35.213530    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:35.213595    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:35.708424    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:35.708512    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:35.708512    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:35.708605    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:35.712889    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:35.712956    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:35.712956    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:35.712956    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:35 GMT
	I0314 19:41:35.712956    8428 round_trippers.go:580]     Audit-Id: 00e913b2-1b38-424c-8611-652a8f9f4f52
	I0314 19:41:35.712956    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:35.713021    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:35.713021    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:35.713328    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:36.209782    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:36.209782    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:36.209880    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:36.209880    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:36.213180    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:36.213180    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:36.213180    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:36 GMT
	I0314 19:41:36.213180    8428 round_trippers.go:580]     Audit-Id: 22025d39-8ff9-4b37-90bd-50da8c10b3d3
	I0314 19:41:36.213180    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:36.213180    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:36.213180    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:36.213180    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:36.213992    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:36.214449    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:36.710015    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:36.710015    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:36.710015    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:36.710015    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:36.715827    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:36.715968    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:36.715968    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:36.715968    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:36.715968    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:36.715968    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:36 GMT
	I0314 19:41:36.715968    8428 round_trippers.go:580]     Audit-Id: 3f0c16bb-a945-4a3e-bd1c-d07c66b61ef9
	I0314 19:41:36.715968    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:36.715968    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:37.197552    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:37.197552    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:37.197552    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:37.197552    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:37.201960    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:37.201960    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:37.201960    8428 round_trippers.go:580]     Audit-Id: 183eb24f-291b-4cfa-bb64-b49c3c05e888
	I0314 19:41:37.201960    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:37.201960    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:37.201960    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:37.201960    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:37.201960    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:37 GMT
	I0314 19:41:37.201960    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:37.698833    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:37.698901    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:37.698901    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:37.698968    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:37.705769    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:41:37.705769    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:37.705769    8428 round_trippers.go:580]     Audit-Id: aa5cbed7-5978-45ea-b68b-9fa5eaac33e9
	I0314 19:41:37.705769    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:37.705769    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:37.705769    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:37.705769    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:37.705769    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:37 GMT
	I0314 19:41:37.705769    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:38.211491    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:38.211491    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:38.211491    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:38.211491    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:38.216247    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:38.216247    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:38.216247    8428 round_trippers.go:580]     Audit-Id: 5983a43c-4199-482c-81a4-89b23ace5760
	I0314 19:41:38.216247    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:38.216247    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:38.216247    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:38.216247    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:38.216247    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:38 GMT
	I0314 19:41:38.216799    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:38.217298    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:38.709246    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:38.709246    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:38.709246    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:38.709246    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:38.712497    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:38.713368    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:38.713368    8428 round_trippers.go:580]     Audit-Id: 74eb41ff-7336-4f60-a913-313c50fc0b27
	I0314 19:41:38.713368    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:38.713368    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:38.713469    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:38.713469    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:38.713469    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:38 GMT
	I0314 19:41:38.713787    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:39.206718    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:39.206718    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:39.206718    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:39.206718    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:39.211305    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:39.211305    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:39.211305    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:39.211305    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:39.212017    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:39 GMT
	I0314 19:41:39.212017    8428 round_trippers.go:580]     Audit-Id: d0bd5345-1f27-44f5-bd58-b5aac7cd8f01
	I0314 19:41:39.212017    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:39.212017    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:39.212378    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:39.697087    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:39.697499    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:39.697499    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:39.697594    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:39.703696    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:39.703777    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:39.703777    8428 round_trippers.go:580]     Audit-Id: c6041361-a19e-4eb4-82ab-118084172ce8
	I0314 19:41:39.703777    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:39.703868    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:39.703868    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:39.703868    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:39.703868    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:39 GMT
	I0314 19:41:39.704194    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:40.208478    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:40.208478    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:40.208695    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:40.208695    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:40.213404    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:40.213404    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:40.213404    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:40.213404    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:40.213404    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:40 GMT
	I0314 19:41:40.213404    8428 round_trippers.go:580]     Audit-Id: 1d8a0d30-a790-4fb3-8344-52a305e27afa
	I0314 19:41:40.213404    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:40.213404    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:40.213793    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:40.709352    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:40.709429    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:40.709429    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:40.709429    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:40.716639    8428 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:41:40.716639    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:40.716639    8428 round_trippers.go:580]     Audit-Id: 6d2df5bf-207e-4bd9-a80b-67330db0e987
	I0314 19:41:40.716639    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:40.716639    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:40.716639    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:40.716639    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:40.716639    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:40 GMT
	I0314 19:41:40.716639    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:40.717652    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:41.211418    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:41.211627    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:41.211627    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:41.211627    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:41.215210    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:41.215700    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:41.215700    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:41.215700    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:41 GMT
	I0314 19:41:41.215700    8428 round_trippers.go:580]     Audit-Id: b1fe46bb-dcce-4475-99c8-116dc549e69e
	I0314 19:41:41.215700    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:41.215700    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:41.215700    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:41.215700    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:41.709834    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:41.709834    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:41.709834    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:41.709834    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:41.713901    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:41.713901    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:41.713901    8428 round_trippers.go:580]     Audit-Id: 4d2f24b7-e161-47d0-8da5-96c9e378d420
	I0314 19:41:41.713901    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:41.713901    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:41.713901    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:41.713901    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:41.713901    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:41 GMT
	I0314 19:41:41.714047    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1867","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0314 19:41:41.714511    8428 node_ready.go:49] node "multinode-442000" has status "Ready":"True"
	I0314 19:41:41.714610    8428 node_ready.go:38] duration metric: took 30.5181132s for node "multinode-442000" to be "Ready" ...
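The node_ready lines above record minikube polling GET /api/v1/nodes/multinode-442000 roughly every 500ms until the node reports the Ready condition; the "took 30.5181132s" duration metric is the total wall-clock time of that loop. As a minimal sketch only (not minikube's actual implementation), a client-go poll of the following shape reproduces the request pattern seen in this log. The 500ms interval and the 6-minute timeout are assumptions read off the log cadence and the pod-wait budget, and waitNodeReady is a hypothetical helper name:

	// Illustrative reconstruction of the polling loop visible in the log
	// above; assumes a reachable kubeconfig and the node name from the log.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady issues GET /api/v1/nodes/<name> every 500ms (matching
	// the ~500ms spacing of the requests logged above) until the node's
	// Ready condition is True or the timeout elapses.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // corresponds to the node_ready.go:49 "Ready":"True" line
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(cs, "multinode-442000", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node Ready")
	}

In the log, each loop iteration corresponds to one round_trippers request/response pair; the loop exits at 19:41:41 when the node object at resourceVersion 1867 first reports "Ready":"True", and the same poll-until-ready pattern repeats below for each system-critical pod, starting with coredns-5dd5756b68-d22jc.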
	I0314 19:41:41.714610    8428 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:41:41.714762    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods
	I0314 19:41:41.714762    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:41.714762    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:41.714762    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:41.720492    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:41.720492    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:41.720492    8428 round_trippers.go:580]     Audit-Id: 23467580-7140-4737-ae92-fc35303fd912
	I0314 19:41:41.720492    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:41.720492    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:41.720492    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:41.720492    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:41.720492    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:41 GMT
	I0314 19:41:41.722598    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1867"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83020 chars]
	I0314 19:41:41.726160    8428 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:41.726686    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:41.726749    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:41.726749    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:41.726749    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:41.729506    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:41.730210    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:41.730210    8428 round_trippers.go:580]     Audit-Id: aac4ba97-4438-4e5c-b248-0e23c1db98a1
	I0314 19:41:41.730210    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:41.730210    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:41.730210    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:41.730210    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:41.730210    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:41 GMT
	I0314 19:41:41.730821    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:41.731431    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:41.731431    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:41.731431    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:41.731504    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:41.734317    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:41.734317    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:41.734317    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:41.734317    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:41.734317    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:41.734317    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:41 GMT
	I0314 19:41:41.734317    8428 round_trippers.go:580]     Audit-Id: a86aada6-4dcd-4600-8413-567c0ef68fe1
	I0314 19:41:41.734317    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:41.734824    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1867","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0314 19:41:42.239804    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:42.239804    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:42.239875    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:42.239875    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:42.243092    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:42.243868    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:42.243868    8428 round_trippers.go:580]     Audit-Id: c27723f4-9f49-4996-a08e-d841a22a19a8
	I0314 19:41:42.243868    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:42.243868    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:42.243868    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:42.243868    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:42.243868    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:42 GMT
	I0314 19:41:42.244108    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:42.245153    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:42.245153    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:42.245244    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:42.245244    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:42.248392    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:42.248447    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:42.248447    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:42.248447    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:42 GMT
	I0314 19:41:42.248447    8428 round_trippers.go:580]     Audit-Id: f2ec66a6-a178-4878-ad81-569404ee6f75
	I0314 19:41:42.248447    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:42.248447    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:42.248447    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:42.248762    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1867","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0314 19:41:42.739855    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:42.739883    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:42.739923    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:42.739923    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:42.743580    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:42.743580    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:42.743580    8428 round_trippers.go:580]     Audit-Id: d80107bf-25a5-4cb9-99a5-84b14b857b50
	I0314 19:41:42.743580    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:42.743580    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:42.743580    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:42.743580    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:42.743580    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:43 GMT
	I0314 19:41:42.744187    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:42.744838    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:42.744900    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:42.744900    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:42.744948    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:42.749179    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:42.749179    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:42.749179    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:43 GMT
	I0314 19:41:42.749179    8428 round_trippers.go:580]     Audit-Id: 7f61f411-82b1-4b3d-a89e-1667431fc0b0
	I0314 19:41:42.749179    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:42.749179    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:42.749179    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:42.749179    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:42.749179    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1867","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0314 19:41:43.241429    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:43.241429    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:43.241429    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:43.241429    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:43.245469    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:43.245469    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:43.245469    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:43.245469    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:43.245469    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:43 GMT
	I0314 19:41:43.245469    8428 round_trippers.go:580]     Audit-Id: 947f9ba5-6674-4db6-b848-ccfcab0b246c
	I0314 19:41:43.245469    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:43.245469    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:43.245756    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:43.246225    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:43.246225    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:43.246225    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:43.246225    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:43.253802    8428 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:41:43.254028    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:43.254028    8428 round_trippers.go:580]     Audit-Id: cbe17f86-20eb-4b61-96fe-0a6469ce5633
	I0314 19:41:43.254028    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:43.254028    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:43.254028    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:43.254101    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:43.254101    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:43 GMT
	I0314 19:41:43.254248    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1867","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0314 19:41:43.727571    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:43.727571    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:43.727571    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:43.727571    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:43.731149    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:43.731149    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:43.731149    8428 round_trippers.go:580]     Audit-Id: 0be58a2d-4b28-46b4-a274-6ea303323589
	I0314 19:41:43.731893    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:43.731893    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:43.731893    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:43.731893    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:43.731893    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:43 GMT
	I0314 19:41:43.731990    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:43.732631    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:43.732631    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:43.732631    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:43.732631    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:43.735636    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:43.735959    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:43.735959    8428 round_trippers.go:580]     Audit-Id: b23b7a93-0d29-4cc3-b018-7ed68b875015
	I0314 19:41:43.736062    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:43.736062    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:43.736062    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:43.736105    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:43.736129    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:43 GMT
	I0314 19:41:43.736408    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1867","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0314 19:41:43.737128    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
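The pod_ready.go:102 entry above closes one poll iteration: minikube fetched the coredns pod, fetched the node it runs on, found the pod's Ready condition still False, and will retry. A minimal sketch of that readiness check, assuming a typed corev1.Pod; the helper name isPodReady is hypothetical, not minikube's actual function:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True,
// the condition pod_ready.go evaluates on every iteration.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println(isPodReady(pod)) // false, matching the log's "Ready":"False"
}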
	I0314 19:41:44.241265    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:44.241407    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:44.241407    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:44.241407    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:44.245567    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:44.245644    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:44.245644    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:44.245644    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:44 GMT
	I0314 19:41:44.245644    8428 round_trippers.go:580]     Audit-Id: 584dcd68-36d2-4116-baca-d1a18fd29ceb
	I0314 19:41:44.245644    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:44.245717    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:44.245717    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:44.245756    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:44.246727    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:44.246727    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:44.246798    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:44.246798    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:44.249946    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:44.249982    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:44.249982    8428 round_trippers.go:580]     Audit-Id: e4e08b0e-dfde-4f43-b12d-9a0af751dcf0
	I0314 19:41:44.249982    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:44.250019    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:44.250019    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:44.250019    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:44.250019    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:44 GMT
	I0314 19:41:44.250193    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:44.727030    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:44.727120    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:44.727120    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:44.727120    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:44.730910    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:44.730910    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:44.730910    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:44 GMT
	I0314 19:41:44.730910    8428 round_trippers.go:580]     Audit-Id: 3110bc10-3305-47a3-a749-914a214634a6
	I0314 19:41:44.731283    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:44.731283    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:44.731283    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:44.731373    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:44.731643    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:44.732587    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:44.732587    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:44.732587    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:44.732587    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:44.736273    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:44.736273    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:44.736273    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:44.736273    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:44.736273    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:44 GMT
	I0314 19:41:44.736273    8428 round_trippers.go:580]     Audit-Id: f62f6f7f-95a6-4353-beb9-e80878ffde8a
	I0314 19:41:44.736273    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:44.736273    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:44.736273    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:45.227108    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:45.227108    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:45.227108    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:45.227108    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:45.231696    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:45.231990    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:45.231990    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:45 GMT
	I0314 19:41:45.231990    8428 round_trippers.go:580]     Audit-Id: 97b8382c-47ac-416e-8716-2df36bd5a581
	I0314 19:41:45.231990    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:45.231990    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:45.231990    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:45.231990    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:45.231990    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:45.232844    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:45.232844    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:45.232844    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:45.232844    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:45.236196    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:45.236196    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:45.236196    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:45.236196    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:45 GMT
	I0314 19:41:45.236196    8428 round_trippers.go:580]     Audit-Id: cf556e50-baf0-4c08-9d7f-7c4a66616030
	I0314 19:41:45.236196    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:45.236196    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:45.236196    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:45.236196    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:45.740583    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:45.740583    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:45.740583    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:45.740583    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:45.746440    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:45.746440    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:45.746506    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:45.746506    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:46 GMT
	I0314 19:41:45.746506    8428 round_trippers.go:580]     Audit-Id: d9d5ca48-f1fb-4c5e-8f6f-5bdae1731f49
	I0314 19:41:45.746506    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:45.746506    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:45.746506    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:45.746506    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:45.747838    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:45.747905    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:45.747905    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:45.747905    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:45.751281    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:45.751281    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:45.751281    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:45.751281    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:46 GMT
	I0314 19:41:45.751281    8428 round_trippers.go:580]     Audit-Id: 0c4766c7-1604-40b9-bb87-8f37a401dd23
	I0314 19:41:45.751281    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:45.751281    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:45.751281    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:45.751551    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:45.752086    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
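The timestamps show the retry cadence: each pod+node GET pair repeats roughly every 500 ms (19:41:45.227, 19:41:45.740, 19:41:46.241, ...). A minimal sketch of such a fixed-interval poll loop under that assumption; checkOnce is a hypothetical stand-in for the two GETs plus the readiness test:

package main

import (
	"context"
	"fmt"
	"time"
)

// pollUntilReady calls checkOnce at each tick until it returns true
// or the context expires, mirroring the ~500 ms loop in the log.
func pollUntilReady(ctx context.Context, interval time.Duration, checkOnce func() bool) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if checkOnce() {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	attempts := 0
	err := pollUntilReady(ctx, 500*time.Millisecond, func() bool {
		attempts++
		return attempts >= 3 // pretend the pod turns Ready on the third check
	})
	fmt.Println(attempts, err) // 3 <nil>
}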
	I0314 19:41:46.241471    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:46.241471    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:46.241471    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:46.241572    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:46.247054    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:46.247054    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:46.247054    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:46.247054    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:46 GMT
	I0314 19:41:46.247054    8428 round_trippers.go:580]     Audit-Id: c1dad6d1-7449-479c-abeb-2282320122ee
	I0314 19:41:46.247054    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:46.247054    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:46.247054    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:46.247730    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:46.248313    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:46.248399    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:46.248431    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:46.248431    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:46.251703    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:46.251909    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:46.251909    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:46.251909    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:46.251909    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:46.251955    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:46 GMT
	I0314 19:41:46.251955    8428 round_trippers.go:580]     Audit-Id: fa1c67ec-7b0e-4db5-aac9-165132cc7099
	I0314 19:41:46.251955    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:46.252053    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:46.741033    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:46.741033    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:46.741033    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:46.741033    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:46.745201    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:46.745201    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:46.745201    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:46.745201    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:46.745201    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:47 GMT
	I0314 19:41:46.745201    8428 round_trippers.go:580]     Audit-Id: cc3b7c35-34e2-42d2-9419-6fea1ee700eb
	I0314 19:41:46.745201    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:46.745201    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:46.745201    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:46.746627    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:46.746684    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:46.746684    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:46.746740    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:46.749448    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:46.749448    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:46.749448    8428 round_trippers.go:580]     Audit-Id: ae614bd3-d3a7-4628-83a8-3682e4cbbd6c
	I0314 19:41:46.749448    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:46.749448    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:46.749448    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:46.749448    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:46.749448    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:47 GMT
	I0314 19:41:46.750654    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:47.238686    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:47.238686    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:47.238686    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:47.238686    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:47.242009    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:47.242009    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:47.242009    8428 round_trippers.go:580]     Audit-Id: d2208470-c7bc-43de-a253-0f0ffbfcfd90
	I0314 19:41:47.242009    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:47.242009    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:47.242009    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:47.242009    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:47.242009    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:47 GMT
	I0314 19:41:47.242935    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:47.243985    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:47.243985    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:47.244061    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:47.244061    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:47.247654    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:47.247654    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:47.247654    8428 round_trippers.go:580]     Audit-Id: 24bb94a0-efaa-42e8-9b9f-c9241da45bc6
	I0314 19:41:47.247654    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:47.247654    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:47.247654    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:47.247654    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:47.247654    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:47 GMT
	I0314 19:41:47.247654    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:47.736515    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:47.736515    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:47.736515    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:47.736515    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:47.740100    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:47.741005    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:47.741005    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:47.741005    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:47.741005    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:47.741005    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:48 GMT
	I0314 19:41:47.741005    8428 round_trippers.go:580]     Audit-Id: 5261e89f-cc9c-4705-9bd4-f65118544af1
	I0314 19:41:47.741005    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:47.741232    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:47.741918    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:47.741918    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:47.741918    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:47.741976    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:47.745656    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:47.745656    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:47.745656    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:47.745656    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:47.745656    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:47.745656    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:47.745656    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:48 GMT
	I0314 19:41:47.745656    8428 round_trippers.go:580]     Audit-Id: a2a4b706-ae08-4f3e-9fe2-200535ce7ba3
	I0314 19:41:47.745656    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:48.235940    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:48.235940    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:48.235940    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:48.235940    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:48.239513    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:48.240145    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:48.240145    8428 round_trippers.go:580]     Audit-Id: 3217eaed-1637-43ad-bcc7-b0fc46882d37
	I0314 19:41:48.240145    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:48.240145    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:48.240145    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:48.240207    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:48.240207    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:48 GMT
	I0314 19:41:48.240546    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:48.241232    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:48.241232    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:48.241232    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:48.241232    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:48.247394    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:41:48.247394    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:48.247394    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:48.247394    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:48.247394    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:48 GMT
	I0314 19:41:48.247394    8428 round_trippers.go:580]     Audit-Id: 843378bf-de35-4827-a8cb-3f161b3eda27
	I0314 19:41:48.247394    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:48.247394    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:48.247925    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:48.248061    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
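Every block in this trace has the same shape: the verb and URL (round_trippers.go:463), the request headers (:469/:473), the response status with latency (:574), and the response headers (:577/:580), emitted by client-go's debugging transport at high -v levels. A self-contained sketch of a round tripper that logs in that style, written against plain net/http rather than client-go's actual implementation:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// loggingTransport prints request and response details in the same
// order as the round_trippers.go lines above.
type loggingTransport struct{ next http.RoundTripper }

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("%s %s\n", req.Method, req.URL)
	fmt.Println("Request Headers:")
	for k, vals := range req.Header {
		for _, v := range vals {
			fmt.Printf("    %s: %s\n", k, v)
		}
	}
	start := time.Now()
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	fmt.Printf("Response Status: %s in %d milliseconds\n", resp.Status, time.Since(start).Milliseconds())
	fmt.Println("Response Headers:")
	for k, vals := range resp.Header {
		for _, v := range vals {
			fmt.Printf("    %s: %s\n", k, v)
		}
	}
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	if resp, err := client.Get("https://example.com/"); err == nil {
		resp.Body.Close()
	}
}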
	I0314 19:41:48.737654    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:48.737654    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:48.737654    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:48.737654    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:48.743116    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:48.743649    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:48.743649    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:48.743649    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:48.743649    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:49 GMT
	I0314 19:41:48.743649    8428 round_trippers.go:580]     Audit-Id: 7e5c8c94-1664-44e8-b72d-0c8d80f907a7
	I0314 19:41:48.743649    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:48.743649    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:48.743954    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:48.744713    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:48.744713    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:48.744713    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:48.744713    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:48.748424    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:48.748424    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:48.748424    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:48.748424    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:48.748424    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:48.748424    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:49 GMT
	I0314 19:41:48.748424    8428 round_trippers.go:580]     Audit-Id: c9ff291c-66fd-4679-b6ef-7a6cd2d63484
	I0314 19:41:48.748424    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:48.748424    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:49.237453    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:49.237520    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:49.237575    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:49.237575    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:49.243325    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:49.243325    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:49.243325    8428 round_trippers.go:580]     Audit-Id: 385afd4b-5ebe-4a86-875f-c8bd54987cc8
	I0314 19:41:49.243325    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:49.243325    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:49.243325    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:49.243325    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:49.243325    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:49 GMT
	I0314 19:41:49.243325    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:49.244173    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:49.244281    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:49.244281    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:49.244281    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:49.247889    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:49.247889    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:49.247889    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:49.247889    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:49.247889    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:49.247889    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:49.247889    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:49 GMT
	I0314 19:41:49.247889    8428 round_trippers.go:580]     Audit-Id: 35d46784-af67-4676-83ef-fc873ff549ba
	I0314 19:41:49.247889    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:49.734075    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:49.734075    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:49.734075    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:49.734075    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:49.737547    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:49.737547    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:49.737547    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:50 GMT
	I0314 19:41:49.737547    8428 round_trippers.go:580]     Audit-Id: 02d243c5-fb2d-402f-945b-ba9bac53d9d0
	I0314 19:41:49.737547    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:49.737547    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:49.737547    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:49.737547    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:49.738320    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:49.738947    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:49.739027    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:49.739027    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:49.739027    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:49.742171    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:49.742171    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:49.742547    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:49.742547    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:49.742547    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:50 GMT
	I0314 19:41:49.742547    8428 round_trippers.go:580]     Audit-Id: a78c0188-d2db-461b-9f7f-9356cdcbe18e
	I0314 19:41:49.742547    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:49.742547    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:49.743013    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:50.232310    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:50.232310    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:50.232310    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:50.232310    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:50.235903    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:50.235903    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:50.235903    8428 round_trippers.go:580]     Audit-Id: d6100e54-68ff-438a-8532-332fd7561488
	I0314 19:41:50.235903    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:50.236732    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:50.236732    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:50.236732    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:50.236732    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:50 GMT
	I0314 19:41:50.236812    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:50.237941    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:50.237941    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:50.238013    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:50.238013    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:50.243647    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:50.243647    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:50.243647    8428 round_trippers.go:580]     Audit-Id: 5f589a5a-5cb6-4cb7-b5af-682fc6eb04ea
	I0314 19:41:50.243647    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:50.243647    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:50.243647    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:50.243647    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:50.243647    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:50 GMT
	I0314 19:41:50.243647    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:50.736406    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:50.736459    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:50.736512    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:50.736512    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:50.739743    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:50.739743    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:50.739743    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:51 GMT
	I0314 19:41:50.739743    8428 round_trippers.go:580]     Audit-Id: 6c2b19ec-101b-4222-99e5-6693e17bca16
	I0314 19:41:50.739743    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:50.739743    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:50.739743    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:50.739743    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:50.740345    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:50.742395    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:50.742395    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:50.742395    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:50.742395    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:50.747035    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:50.748055    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:50.748055    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:50.748055    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:51 GMT
	I0314 19:41:50.748100    8428 round_trippers.go:580]     Audit-Id: de2a2856-5d20-4199-a3b7-b99d26947ef8
	I0314 19:41:50.748100    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:50.748100    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:50.748100    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:50.748434    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:50.749228    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:41:51.241145    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:51.241145    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:51.241145    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:51.241145    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:51.244992    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:51.245474    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:51.245474    8428 round_trippers.go:580]     Audit-Id: 5b556003-1bca-4780-8667-e672da262494
	I0314 19:41:51.245474    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:51.245474    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:51.245474    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:51.245474    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:51.245474    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:51 GMT
	I0314 19:41:51.245474    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:51.246175    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:51.246175    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:51.246175    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:51.246175    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:51.249390    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:51.249390    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:51.249390    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:51.249390    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:51.249390    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:51.249390    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:51 GMT
	I0314 19:41:51.249390    8428 round_trippers.go:580]     Audit-Id: e4d510dd-3726-4496-99c4-51f527015b16
	I0314 19:41:51.249390    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:51.249390    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:51.740068    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:51.740068    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:51.740068    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:51.740068    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:51.744445    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:51.744445    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:51.744445    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:51.744445    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:52 GMT
	I0314 19:41:51.744445    8428 round_trippers.go:580]     Audit-Id: 6e203b11-4300-4326-ad82-3dda87102f01
	I0314 19:41:51.744445    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:51.744445    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:51.744445    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:51.744445    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:51.745189    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:51.745282    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:51.745282    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:51.745282    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:51.748583    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:51.748583    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:51.748583    8428 round_trippers.go:580]     Audit-Id: 42ddeaaf-b878-4272-bbc4-5bf85e5bd669
	I0314 19:41:51.748583    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:51.748583    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:51.748583    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:51.748583    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:51.748583    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:52 GMT
	I0314 19:41:51.748583    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:52.240983    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:52.240983    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:52.240983    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:52.240983    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:52.245656    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:52.245656    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:52.245656    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:52.245656    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:52 GMT
	I0314 19:41:52.245656    8428 round_trippers.go:580]     Audit-Id: f79c0237-2bb3-4edf-ad81-093b0acadba7
	I0314 19:41:52.245656    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:52.245656    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:52.245656    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:52.245882    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:52.246348    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:52.246348    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:52.246348    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:52.246348    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:52.250199    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:52.250199    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:52.250199    8428 round_trippers.go:580]     Audit-Id: 77784c83-5ef4-4188-900a-0c33cfbe7fdb
	I0314 19:41:52.250199    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:52.250199    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:52.250199    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:52.250199    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:52.250199    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:52 GMT
	I0314 19:41:52.250461    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:52.740949    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:52.740949    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:52.740949    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:52.740949    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:52.744521    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:52.745302    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:52.745446    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:52.745495    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:52.745495    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:52.745538    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:52.745538    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:53 GMT
	I0314 19:41:52.745538    8428 round_trippers.go:580]     Audit-Id: ae3ed482-02ea-468b-9fbc-f88ee73df7a3
	I0314 19:41:52.745538    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:52.746139    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:52.746139    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:52.746139    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:52.746139    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:52.749789    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:52.749789    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:52.749789    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:52.749789    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:53 GMT
	I0314 19:41:52.749789    8428 round_trippers.go:580]     Audit-Id: 1cbd399f-4bdd-4263-bde3-1e8c70e0f4ee
	I0314 19:41:52.749789    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:52.749789    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:52.749789    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:52.749789    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:52.749789    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:41:53.242741    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:53.242816    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:53.242816    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:53.242816    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:53.251110    8428 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 19:41:53.251110    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:53.251110    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:53.251110    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:53.251110    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:53.251110    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:53.251110    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:53 GMT
	I0314 19:41:53.251110    8428 round_trippers.go:580]     Audit-Id: b58e29e5-31b8-4827-af82-5fce39f6a3a6
	I0314 19:41:53.251320    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:53.252077    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:53.252130    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:53.252130    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:53.252130    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:53.261917    8428 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0314 19:41:53.261917    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:53.262672    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:53.262672    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:53.262672    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:53 GMT
	I0314 19:41:53.262672    8428 round_trippers.go:580]     Audit-Id: 27c11afa-2538-45c2-ac85-8c6da5e883e5
	I0314 19:41:53.262672    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:53.262672    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:53.262870    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:53.731603    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:53.731697    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:53.731697    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:53.731697    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:53.735003    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:53.735003    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:53.735003    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:53 GMT
	I0314 19:41:53.735003    8428 round_trippers.go:580]     Audit-Id: 4ba9afd3-943a-4005-b634-47a8d090d386
	I0314 19:41:53.735003    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:53.735003    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:53.735003    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:53.735003    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:53.736241    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:53.736661    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:53.736661    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:53.736661    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:53.736661    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:53.741612    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:53.741612    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:53.741612    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:54 GMT
	I0314 19:41:53.741612    8428 round_trippers.go:580]     Audit-Id: ab5aea84-1b1b-4625-829e-1cd5ec19ce09
	I0314 19:41:53.741612    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:53.741612    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:53.741612    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:53.741612    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:53.741612    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:54.232631    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:54.232702    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:54.232702    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:54.232702    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:54.237118    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:54.237118    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:54.237118    8428 round_trippers.go:580]     Audit-Id: d51376bd-bdf4-4df7-ac56-52dbb0a5ed83
	I0314 19:41:54.237118    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:54.237118    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:54.237118    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:54.237118    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:54.237118    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:54 GMT
	I0314 19:41:54.237118    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:54.237946    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:54.237946    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:54.238035    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:54.238035    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:54.241168    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:54.241168    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:54.241168    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:54.241168    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:54.241168    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:54.241168    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:54.241407    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:54 GMT
	I0314 19:41:54.241407    8428 round_trippers.go:580]     Audit-Id: ba7b9edb-ba56-4dd0-82fb-6e338a923bea
	I0314 19:41:54.241688    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:54.735452    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:54.735681    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:54.735681    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:54.735681    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:54.739830    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:54.739830    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:54.739830    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:55 GMT
	I0314 19:41:54.739830    8428 round_trippers.go:580]     Audit-Id: 7769d62e-01cc-4e9b-9ca4-163dff0075f8
	I0314 19:41:54.739910    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:54.739910    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:54.739910    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:54.739910    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:54.740058    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:54.740648    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:54.740648    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:54.740741    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:54.740741    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:54.743856    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:54.744066    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:54.744066    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:54.744066    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:54.744066    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:54.744066    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:54.744066    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:55 GMT
	I0314 19:41:54.744066    8428 round_trippers.go:580]     Audit-Id: b12d7112-5dd2-492b-858b-938837f3ae8f
	I0314 19:41:54.744333    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:55.235138    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:55.235200    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:55.235200    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:55.235200    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:55.239334    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:55.239334    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:55.239334    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:55.239334    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:55.239334    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:55 GMT
	I0314 19:41:55.239334    8428 round_trippers.go:580]     Audit-Id: cd5954d9-323a-4a86-a178-85ceb9b09d8e
	I0314 19:41:55.239334    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:55.239334    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:55.239334    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:55.240666    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:55.240666    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:55.240666    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:55.240666    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:55.243468    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:55.244171    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:55.244171    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:55.244171    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:55 GMT
	I0314 19:41:55.244171    8428 round_trippers.go:580]     Audit-Id: d9dac7b5-10eb-4532-81b2-4793c32f00b7
	I0314 19:41:55.244171    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:55.244171    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:55.244171    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:55.244260    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:55.244260    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:41:55.732956    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:55.733040    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:55.733040    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:55.733040    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:55.739580    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:41:55.739580    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:55.739580    8428 round_trippers.go:580]     Audit-Id: aceb49bb-f93d-4ca1-8a97-0e26f4c29e3c
	I0314 19:41:55.739580    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:55.739580    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:55.739580    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:55.739580    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:55.739580    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:56 GMT
	I0314 19:41:55.739580    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:55.740251    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:55.740251    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:55.740251    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:55.740251    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:55.743931    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:55.743931    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:55.743931    8428 round_trippers.go:580]     Audit-Id: e3197417-e96b-461e-9a13-2cb3173e135e
	I0314 19:41:55.743931    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:55.743931    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:55.743931    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:55.743931    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:55.743931    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:56 GMT
	I0314 19:41:55.743931    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:56.233760    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:56.233760    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:56.233760    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:56.233954    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:56.237689    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:56.238016    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:56.238078    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:56.238078    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:56.238078    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:56 GMT
	I0314 19:41:56.238109    8428 round_trippers.go:580]     Audit-Id: 9528c104-50cb-4376-858d-1405c722b092
	I0314 19:41:56.238109    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:56.238109    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:56.238620    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:56.239454    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:56.239489    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:56.239538    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:56.239538    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:56.242341    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:56.243190    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:56.243190    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:56.243190    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:56.243249    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:56 GMT
	I0314 19:41:56.243249    8428 round_trippers.go:580]     Audit-Id: ba502ac7-f672-45dd-a1c9-01359b92f829
	I0314 19:41:56.243249    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:56.243249    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:56.243490    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:56.736161    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:56.736395    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:56.736492    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:56.736492    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:56.739940    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:56.739940    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:56.739940    8428 round_trippers.go:580]     Audit-Id: 7cd1e3df-1d36-4139-a505-6b4ef9fbfc38
	I0314 19:41:56.739940    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:56.739940    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:56.739940    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:56.739940    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:56.739940    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:57 GMT
	I0314 19:41:56.740521    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:56.740832    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:56.740832    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:56.740832    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:56.740832    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:56.744515    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:56.744515    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:56.744515    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:56.744515    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:56.744515    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:57 GMT
	I0314 19:41:56.744515    8428 round_trippers.go:580]     Audit-Id: fd91bcae-a05d-4e01-8a97-e0dcd67588b7
	I0314 19:41:56.744515    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:56.744515    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:56.744515    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:57.236132    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:57.236226    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:57.236226    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:57.236226    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:57.239674    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:57.240134    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:57.240134    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:57.240134    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:57.240134    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:57.240316    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:57 GMT
	I0314 19:41:57.240316    8428 round_trippers.go:580]     Audit-Id: 4b1c797b-3ff3-48ca-aab3-36ac1c0711af
	I0314 19:41:57.240382    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:57.240603    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:57.241386    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:57.241465    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:57.241465    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:57.241465    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:57.243719    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:57.243719    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:57.243719    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:57.243719    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:57.243719    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:57 GMT
	I0314 19:41:57.243719    8428 round_trippers.go:580]     Audit-Id: 26450d5b-26a3-4cd8-8f33-02d5ab1ae860
	I0314 19:41:57.243719    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:57.243719    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:57.244898    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:57.245413    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:41:57.738333    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:57.738333    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:57.738333    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:57.738488    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:57.741530    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:57.742606    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:57.742606    8428 round_trippers.go:580]     Audit-Id: 830f79ab-9c16-4427-8397-41ac517a92a1
	I0314 19:41:57.742606    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:57.742606    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:57.742606    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:57.742606    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:57.742606    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:58 GMT
	I0314 19:41:57.742606    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:57.743222    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:57.743222    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:57.743222    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:57.743222    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:57.746912    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:57.746912    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:57.746912    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:57.746912    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:57.746912    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:57.746912    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:57.746912    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:58 GMT
	I0314 19:41:57.746912    8428 round_trippers.go:580]     Audit-Id: 893542fa-f875-439d-aad8-28747860d32a
	I0314 19:41:57.747905    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:58.236358    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:58.236627    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:58.236627    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:58.236627    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:58.243493    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:41:58.243493    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:58.243493    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:58.243493    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:58.243493    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:58.243493    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:58.243493    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:58 GMT
	I0314 19:41:58.243493    8428 round_trippers.go:580]     Audit-Id: c80943cd-5242-403d-9d55-1e999b5e636a
	I0314 19:41:58.243493    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:58.244863    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:58.244863    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:58.244863    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:58.244922    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:58.247056    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:58.248045    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:58.248045    8428 round_trippers.go:580]     Audit-Id: 8a5d57d2-c0bc-488c-8ec1-338b9bdbc1e2
	I0314 19:41:58.248045    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:58.248045    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:58.248045    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:58.248045    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:58.248045    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:58 GMT
	I0314 19:41:58.248195    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:58.736525    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:58.736525    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:58.736525    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:58.736525    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:58.740082    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:58.740697    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:58.740697    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:58.740697    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:58.740697    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:58.740697    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:58.740697    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:59 GMT
	I0314 19:41:58.740697    8428 round_trippers.go:580]     Audit-Id: 698e99c9-f9cf-435f-a9b2-4d55da6aaf9d
	I0314 19:41:58.740697    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:58.741651    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:58.741651    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:58.741651    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:58.741651    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:58.745370    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:58.745448    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:58.745448    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:58.745448    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:59 GMT
	I0314 19:41:58.745448    8428 round_trippers.go:580]     Audit-Id: 87e15df5-959e-4241-8847-e50d77646b8f
	I0314 19:41:58.745525    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:58.745525    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:58.745525    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:58.745656    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:59.236628    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:59.236704    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:59.236704    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:59.236704    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:59.240032    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:59.240032    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:59.240032    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:59.240032    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:59.240032    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:59 GMT
	I0314 19:41:59.240032    8428 round_trippers.go:580]     Audit-Id: 478e8674-e91f-4d38-a9cb-95e94c626c72
	I0314 19:41:59.240032    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:59.240032    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:59.241104    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:59.241503    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:59.241503    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:59.241503    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:59.241503    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:59.245078    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:59.245078    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:59.245078    8428 round_trippers.go:580]     Audit-Id: 2b7f401f-e703-45c2-9a5d-495575c8c0e5
	I0314 19:41:59.245078    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:59.245078    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:59.245078    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:59.245078    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:59.245078    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:59 GMT
	I0314 19:41:59.245354    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:59.245722    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:41:59.737360    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:59.737360    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:59.737360    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:59.737360    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:59.741585    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:59.741585    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:59.741585    8428 round_trippers.go:580]     Audit-Id: 6cf475cf-1b8c-430e-b8e8-b2e6a3c78b4e
	I0314 19:41:59.741585    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:59.741585    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:59.741585    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:59.741585    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:59.741585    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:00 GMT
	I0314 19:41:59.741585    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:59.742502    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:59.742502    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:59.742561    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:59.742561    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:59.745473    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:59.745473    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:59.745473    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:59.745666    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:59.745666    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:59.745666    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:59.745666    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:00 GMT
	I0314 19:41:59.745666    8428 round_trippers.go:580]     Audit-Id: 4dde0bf4-b75d-4fc6-98d5-fcf9394192ff
	I0314 19:41:59.745778    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:00.237659    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:00.237659    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:00.237659    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:00.237659    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:00.242315    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:00.242315    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:00.242315    8428 round_trippers.go:580]     Audit-Id: 83cd1f48-c08c-4020-ba77-7476a6b0355b
	I0314 19:42:00.242315    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:00.242315    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:00.242315    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:00.242315    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:00.242315    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:00 GMT
	I0314 19:42:00.242315    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:00.244009    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:00.244009    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:00.244057    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:00.244057    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:00.247316    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:00.247316    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:00.248137    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:00.248137    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:00.248137    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:00.248137    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:00 GMT
	I0314 19:42:00.248137    8428 round_trippers.go:580]     Audit-Id: 271e1356-4e71-4e5e-b664-021974773825
	I0314 19:42:00.248137    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:00.248383    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:00.738502    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:00.738502    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:00.738502    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:00.738502    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:00.741984    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:00.741984    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:00.741984    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:01 GMT
	I0314 19:42:00.741984    8428 round_trippers.go:580]     Audit-Id: 6312c984-4977-4cd0-ae1d-06915e634932
	I0314 19:42:00.741984    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:00.742066    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:00.742066    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:00.742066    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:00.742262    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:00.742849    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:00.742849    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:00.742849    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:00.742849    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:00.746109    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:00.746402    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:00.746402    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:00.746402    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:00.746402    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:00.746402    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:01 GMT
	I0314 19:42:00.746402    8428 round_trippers.go:580]     Audit-Id: 3b438d53-d996-4b3c-bc42-9f76b1d61219
	I0314 19:42:00.746402    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:00.746607    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:01.239325    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:01.239432    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:01.239432    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:01.239432    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:01.243386    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:01.243386    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:01.243443    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:01.243443    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:01.243443    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:01.243443    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:01.243443    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:01 GMT
	I0314 19:42:01.243443    8428 round_trippers.go:580]     Audit-Id: 9a98388c-e53c-46e4-a571-a6595d25c3fe
	I0314 19:42:01.243618    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:01.244170    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:01.244170    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:01.244253    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:01.244253    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:01.247411    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:01.247411    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:01.247411    8428 round_trippers.go:580]     Audit-Id: 29a5494b-bfbd-4e9e-8267-7bad36e0193e
	I0314 19:42:01.247411    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:01.247411    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:01.247411    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:01.247411    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:01.247411    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:01 GMT
	I0314 19:42:01.247636    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:01.248497    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:42:01.741540    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:01.741540    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:01.741540    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:01.741540    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:01.745137    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:01.745137    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:01.745781    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:01.745781    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:01.745781    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:01.745781    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:02 GMT
	I0314 19:42:01.745781    8428 round_trippers.go:580]     Audit-Id: 7523b4ac-d583-4c2c-a7cf-162269363a9d
	I0314 19:42:01.745781    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:01.745952    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:01.747078    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:01.747123    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:01.747152    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:01.747152    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:01.754049    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:42:01.754049    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:01.754049    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:02 GMT
	I0314 19:42:01.754049    8428 round_trippers.go:580]     Audit-Id: a4b2916e-1a12-4979-89b7-d30146016d26
	I0314 19:42:01.754049    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:01.754049    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:01.754049    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:01.754049    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:01.754728    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:02.240260    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:02.240479    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:02.240479    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:02.240479    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:02.244198    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:02.244885    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:02.244885    8428 round_trippers.go:580]     Audit-Id: ef542ebf-a9cd-434b-99a6-ff7e2ba78cc1
	I0314 19:42:02.244885    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:02.244885    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:02.244885    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:02.244885    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:02.244885    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:02 GMT
	I0314 19:42:02.244984    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:02.245581    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:02.245581    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:02.245581    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:02.245581    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:02.248151    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:02.249023    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:02.249023    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:02 GMT
	I0314 19:42:02.249023    8428 round_trippers.go:580]     Audit-Id: 89621acb-a1a8-4f06-bbcc-116916bbc135
	I0314 19:42:02.249072    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:02.249072    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:02.249072    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:02.249072    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:02.249072    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:02.731334    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:02.731334    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:02.731334    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:02.731334    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:02.735042    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:02.735042    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:02.735042    8428 round_trippers.go:580]     Audit-Id: be59076a-63ef-443c-a5ca-b85b80e401f1
	I0314 19:42:02.735042    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:02.735042    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:02.735042    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:02.735042    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:02.735042    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:02 GMT
	I0314 19:42:02.735735    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:02.736325    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:02.736403    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:02.736403    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:02.736403    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:02.739187    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:02.739187    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:02.739187    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:02.739187    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:02.739187    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:02.739187    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:02.739187    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:03 GMT
	I0314 19:42:02.739187    8428 round_trippers.go:580]     Audit-Id: 580d22b5-82e0-46ca-ac4d-1ce2e25b6f79
	I0314 19:42:02.740300    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:03.231511    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:03.231511    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:03.231511    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:03.231764    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:03.235008    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:03.235008    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:03.235008    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:03.235855    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:03.235855    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:03 GMT
	I0314 19:42:03.235855    8428 round_trippers.go:580]     Audit-Id: e906cfeb-5372-419f-841e-36275aef69b9
	I0314 19:42:03.235855    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:03.235855    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:03.236020    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:03.236668    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:03.236668    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:03.236668    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:03.236668    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:03.239949    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:03.240265    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:03.240265    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:03.240265    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:03 GMT
	I0314 19:42:03.240265    8428 round_trippers.go:580]     Audit-Id: ad260da1-e1a4-425a-8de3-0f4dc9f8611d
	I0314 19:42:03.240265    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:03.240265    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:03.240265    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:03.240335    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:03.729709    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:03.729789    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:03.729789    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:03.729867    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:03.733582    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:03.733823    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:03.733823    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:03.733823    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:03 GMT
	I0314 19:42:03.733823    8428 round_trippers.go:580]     Audit-Id: 9c7ca33a-df0e-48d2-a598-5dcd3e8ebca8
	I0314 19:42:03.733823    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:03.733823    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:03.733823    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:03.733910    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:03.734622    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:03.734622    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:03.734622    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:03.734622    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:03.737356    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:03.737356    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:03.737356    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:03.737356    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:03.737356    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:04 GMT
	I0314 19:42:03.737356    8428 round_trippers.go:580]     Audit-Id: 79313240-6cb5-4091-8cdc-1c165c397efc
	I0314 19:42:03.737356    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:03.737356    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:03.738528    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:03.738905    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:42:04.228336    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:04.228642    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:04.228642    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:04.228642    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:04.232927    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:04.233000    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:04.233000    8428 round_trippers.go:580]     Audit-Id: eb2d5e11-8e00-4103-abb1-ec6cde0e6c3c
	I0314 19:42:04.233000    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:04.233059    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:04.233059    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:04.233059    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:04.233059    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:04 GMT
	I0314 19:42:04.233235    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:04.233235    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:04.233235    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:04.233235    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:04.233235    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:04.237328    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:04.237587    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:04.237587    8428 round_trippers.go:580]     Audit-Id: 7fd398a0-c6b7-499c-ba31-b6af8485812a
	I0314 19:42:04.237587    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:04.237587    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:04.237587    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:04.237587    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:04.237671    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:04 GMT
	I0314 19:42:04.237717    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:04.742889    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:04.742889    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:04.742889    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:04.742889    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:04.747035    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:04.747035    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:04.747035    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:04.747275    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:05 GMT
	I0314 19:42:04.747275    8428 round_trippers.go:580]     Audit-Id: 26ce9fc8-268f-4ac5-b718-8bffe0eb4bcb
	I0314 19:42:04.747275    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:04.747275    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:04.747275    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:04.747382    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:04.747748    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:04.747748    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:04.747748    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:04.747748    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:04.752637    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:04.752674    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:04.752674    8428 round_trippers.go:580]     Audit-Id: 26aa6448-4af0-4f68-8fd1-335763c40acb
	I0314 19:42:04.752674    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:04.752674    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:04.752674    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:04.752674    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:04.752674    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:05 GMT
	I0314 19:42:04.752885    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:05.242615    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:05.242729    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:05.242729    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:05.242729    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:05.247091    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:05.247202    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:05.247202    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:05 GMT
	I0314 19:42:05.247202    8428 round_trippers.go:580]     Audit-Id: 83152fd1-4358-4253-96c4-ed80fec3a0dd
	I0314 19:42:05.247202    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:05.247202    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:05.247202    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:05.247202    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:05.247431    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:05.248071    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:05.248071    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:05.248071    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:05.248071    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:05.251148    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:05.251486    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:05.251486    8428 round_trippers.go:580]     Audit-Id: 1c6b2950-dc46-42b5-993c-5a7839c5f703
	I0314 19:42:05.251486    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:05.251486    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:05.251486    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:05.251486    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:05.251486    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:05 GMT
	I0314 19:42:05.251706    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:05.742679    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:05.742766    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:05.742766    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:05.742766    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:05.747090    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:05.747090    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:05.747180    8428 round_trippers.go:580]     Audit-Id: 3f8e906f-a2b8-4403-923d-01781968ce22
	I0314 19:42:05.747180    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:05.747180    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:05.747180    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:05.747180    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:05.747180    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:06 GMT
	I0314 19:42:05.747466    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:05.748415    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:05.748523    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:05.748523    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:05.748523    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:05.752290    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:05.752290    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:05.752290    8428 round_trippers.go:580]     Audit-Id: 664e836c-1dcc-4aec-b9f8-bce2c313e960
	I0314 19:42:05.752290    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:05.752290    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:05.752290    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:05.752290    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:05.752290    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:06 GMT
	I0314 19:42:05.753281    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:05.753281    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
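
Note: the repeating pattern above — GET the coredns pod, GET its node, then a pod_ready.go:102 status line about every 500 ms — is minikube's readiness poll, retrying until the pod's Ready condition turns True or the wait times out. Below is a minimal illustrative sketch of such a poll using client-go; it approximates the loop driving these log lines and is not minikube's actual pod_ready.go. The helper name waitPodReady, the 500 ms interval, and the 6-minute timeout are assumptions for the example.

	// Minimal readiness-poll sketch (illustrative, not minikube's code).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady re-fetches the pod on each tick, mirroring the GET/status
	// cycle in the log, until PodReady is True or the deadline passes.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			// Same shape as the pod_ready.go:102 lines in this log.
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
			time.Sleep(interval)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		// Load the default kubeconfig, then poll the pod seen in the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs,
			"kube-system", "coredns-5dd5756b68-d22jc",
			500*time.Millisecond, 6*time.Minute); err != nil {
			panic(err)
		}
	}
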
	I0314 19:42:06.241637    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:06.241714    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:06.241714    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:06.241714    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:06.247011    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:42:06.247011    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:06.247011    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:06.247011    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:06.247011    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:06.247011    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:06.247011    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:06 GMT
	I0314 19:42:06.247011    8428 round_trippers.go:580]     Audit-Id: b87bb486-f954-4d46-9cff-74be2314856f
	I0314 19:42:06.247545    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:06.247773    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:06.247773    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:06.247773    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:06.247773    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:06.250994    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:06.250994    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:06.250994    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:06.250994    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:06.250994    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:06.250994    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:06 GMT
	I0314 19:42:06.250994    8428 round_trippers.go:580]     Audit-Id: 6afeb2d0-e202-46d4-aef7-05dc411c17a6
	I0314 19:42:06.250994    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:06.250994    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:06.728875    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:06.728930    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:06.728930    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:06.728989    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:06.732622    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:06.732622    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:06.732622    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:06 GMT
	I0314 19:42:06.732622    8428 round_trippers.go:580]     Audit-Id: 7396783a-6c98-4017-bc95-9bb85a5d0bb4
	I0314 19:42:06.732622    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:06.732622    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:06.732622    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:06.732622    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:06.733094    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:06.733633    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:06.733746    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:06.733746    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:06.733746    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:06.737912    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:06.737912    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:06.737912    8428 round_trippers.go:580]     Audit-Id: b352f220-862a-4d87-8daa-e7b8deade649
	I0314 19:42:06.738243    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:06.738243    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:06.738243    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:06.738243    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:06.738243    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:07 GMT
	I0314 19:42:06.738428    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:07.229844    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:07.229844    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:07.229934    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:07.229934    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:07.234621    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:07.235018    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:07.235018    8428 round_trippers.go:580]     Audit-Id: d64d8135-68ba-4a81-b428-4698ce7398aa
	I0314 19:42:07.235018    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:07.235018    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:07.235018    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:07.235018    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:07.235018    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:07 GMT
	I0314 19:42:07.236873    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:07.237494    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:07.237569    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:07.237569    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:07.237569    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:07.240784    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:07.240784    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:07.240784    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:07.240784    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:07.240784    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:07 GMT
	I0314 19:42:07.240784    8428 round_trippers.go:580]     Audit-Id: 1d041fa9-1876-44ed-8c7b-a1e3db6260b5
	I0314 19:42:07.240784    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:07.240784    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:07.241055    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:07.729149    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:07.729149    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:07.729149    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:07.729149    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:07.734340    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:42:07.734340    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:07.734340    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:07.734340    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:07.734340    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:07.734340    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:07 GMT
	I0314 19:42:07.734340    8428 round_trippers.go:580]     Audit-Id: 7c666071-9683-4eed-802f-e45f37f4feb1
	I0314 19:42:07.734340    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:07.734340    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:07.735181    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:07.735242    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:07.735242    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:07.735242    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:07.737909    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:07.737909    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:07.737909    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:07.737909    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:07.737909    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:07.737909    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:08 GMT
	I0314 19:42:07.737909    8428 round_trippers.go:580]     Audit-Id: 3e5700c3-9c68-44da-a739-c03ad4a563b0
	I0314 19:42:07.737909    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:07.738651    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:08.229377    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:08.229377    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:08.229377    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:08.229377    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:08.233298    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:08.233842    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:08.233842    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:08 GMT
	I0314 19:42:08.233842    8428 round_trippers.go:580]     Audit-Id: 5c0c8501-fa6d-46ce-8a5e-d87ea412e436
	I0314 19:42:08.233842    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:08.233896    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:08.233896    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:08.233896    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:08.234194    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:08.235135    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:08.235135    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:08.235135    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:08.235225    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:08.237464    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:08.238456    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:08.238456    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:08.238456    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:08 GMT
	I0314 19:42:08.238456    8428 round_trippers.go:580]     Audit-Id: f5a0f9a4-c9f6-4803-945e-d70652c0a646
	I0314 19:42:08.238456    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:08.238456    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:08.238456    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:08.238548    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:08.238548    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:42:08.743066    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:08.743066    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:08.743066    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:08.743066    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:08.746635    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:08.747021    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:08.747021    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:08.747021    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:08.747021    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:09 GMT
	I0314 19:42:08.747021    8428 round_trippers.go:580]     Audit-Id: 0b27a5a8-562c-4ebc-9d13-568b35903b6f
	I0314 19:42:08.747021    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:08.747021    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:08.747021    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:08.747774    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:08.747873    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:08.747873    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:08.747873    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:08.751078    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:08.751078    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:08.751078    8428 round_trippers.go:580]     Audit-Id: 152da8a3-937f-4453-bd42-fd9b50749dad
	I0314 19:42:08.751078    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:08.751078    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:08.751078    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:08.751078    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:08.751078    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:09 GMT
	I0314 19:42:08.751666    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:09.243547    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:09.243547    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:09.243547    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:09.243547    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:09.247210    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:09.247210    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:09.247210    8428 round_trippers.go:580]     Audit-Id: 68fb300b-914f-4a93-86ad-db4b67e9e8e6
	I0314 19:42:09.247210    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:09.247210    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:09.247210    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:09.247210    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:09.247210    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:09 GMT
	I0314 19:42:09.247903    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:09.248450    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:09.248563    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:09.248592    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:09.248592    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:09.252639    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:09.252639    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:09.252639    8428 round_trippers.go:580]     Audit-Id: ae0ec6ac-eb04-4861-a718-6628930ff0ba
	I0314 19:42:09.252639    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:09.252639    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:09.252639    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:09.252639    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:09.252639    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:09 GMT
	I0314 19:42:09.253439    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:09.728694    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:09.728694    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:09.728694    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:09.728948    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:09.733667    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:09.733793    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:09.733793    8428 round_trippers.go:580]     Audit-Id: f00c7d01-5c88-4ea4-910d-33b306d4aacf
	I0314 19:42:09.733793    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:09.733793    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:09.733793    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:09.733793    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:09.733793    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:09 GMT
	I0314 19:42:09.733793    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:09.734601    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:09.734601    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:09.734601    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:09.734601    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:09.738417    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:09.738417    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:09.738417    8428 round_trippers.go:580]     Audit-Id: abc3d71c-48dc-4fa3-b91f-c371bfa16a2d
	I0314 19:42:09.738417    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:09.738417    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:09.738417    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:09.738417    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:09.738417    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:10 GMT
	I0314 19:42:09.738669    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:10.232402    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:10.232637    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:10.232690    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:10.232690    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:10.238460    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:10.238460    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:10.238523    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:10.238523    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:10 GMT
	I0314 19:42:10.238523    8428 round_trippers.go:580]     Audit-Id: 971c256b-a813-4d0a-a55a-c59e9e1b460e
	I0314 19:42:10.238523    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:10.238523    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:10.238523    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:10.238976    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:10.239913    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:10.239960    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:10.239960    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:10.239960    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:10.242307    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:10.242307    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:10.242307    8428 round_trippers.go:580]     Audit-Id: 309a0d3c-a75d-4ddd-8544-2d3fda2ce586
	I0314 19:42:10.243313    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:10.243313    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:10.243313    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:10.243313    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:10.243313    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:10 GMT
	I0314 19:42:10.243568    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:10.243968    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:42:10.736996    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:10.736996    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:10.736996    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:10.736996    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:10.743087    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:42:10.743087    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:10.743087    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:10.743087    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:10.743087    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:10.743087    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:10.743087    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:11 GMT
	I0314 19:42:10.743087    8428 round_trippers.go:580]     Audit-Id: 661290cf-6117-4781-af8a-804cfab2f5a3
	I0314 19:42:10.743087    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:10.743905    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:10.743905    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:10.743905    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:10.743905    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:10.747196    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:10.747196    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:10.747196    8428 round_trippers.go:580]     Audit-Id: 63a55bb5-6c16-415f-8f3b-d07dd9c9951b
	I0314 19:42:10.747196    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:10.747196    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:10.747196    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:10.747196    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:10.747196    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:11 GMT
	I0314 19:42:10.747713    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:11.241876    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:11.242090    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:11.242090    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:11.242090    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:11.245858    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:11.245858    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:11.245996    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:11.245996    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:11.245996    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:11 GMT
	I0314 19:42:11.245996    8428 round_trippers.go:580]     Audit-Id: ee7f495e-c249-435e-b817-cf3b140b6cbe
	I0314 19:42:11.245996    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:11.245996    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:11.246116    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:11.246770    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:11.246770    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:11.246770    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:11.246770    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:11.249101    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:11.249101    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:11.249101    8428 round_trippers.go:580]     Audit-Id: a4a5c67d-4539-4d97-a01d-e9c54b59c140
	I0314 19:42:11.249101    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:11.249101    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:11.249101    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:11.249101    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:11.249101    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:11 GMT
	I0314 19:42:11.250230    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:11.743089    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:11.743089    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:11.743089    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:11.743089    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:11.746675    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:11.746675    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:11.746675    8428 round_trippers.go:580]     Audit-Id: abdd4982-e798-4fcd-97d6-7f4f7563370d
	I0314 19:42:11.746675    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:11.746675    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:11.746675    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:11.746675    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:11.746675    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:11.747569    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:11.748272    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:11.748345    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:11.748345    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:11.748345    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:11.751002    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:11.751002    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:11.751002    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:11.751002    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:11.751002    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:11.751002    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:11.751002    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:11.751002    8428 round_trippers.go:580]     Audit-Id: 960555fa-54b5-4555-aeb8-964909247f6e
	I0314 19:42:11.751827    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:12.233574    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:12.233574    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.233574    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.233574    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.237703    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:12.237703    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.237703    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.237703    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.237703    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.237703    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.237703    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.237703    8428 round_trippers.go:580]     Audit-Id: 51fd0700-93c9-4b80-b937-ede353918635
	I0314 19:42:12.237703    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1908","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0314 19:42:12.238597    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:12.238597    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.238649    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.238649    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.241456    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.241657    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.241715    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.241715    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.241715    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.241715    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.241715    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.241715    8428 round_trippers.go:580]     Audit-Id: 9d0a5006-489e-419e-b269-4cbc6810419e
	I0314 19:42:12.241929    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:12.242206    8428 pod_ready.go:92] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:42:12.242206    8428 pod_ready.go:81] duration metric: took 30.5137419s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.242206    8428 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.242206    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-442000
	I0314 19:42:12.242206    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.242206    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.242206    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.244894    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.245712    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.245712    8428 round_trippers.go:580]     Audit-Id: ed709778-c966-4692-ab45-fcb485388b4d
	I0314 19:42:12.245712    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.245712    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.245712    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.245712    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.245712    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.245712    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-442000","namespace":"kube-system","uid":"106cc31d-907f-4853-9e8d-f13c8ac4e398","resourceVersion":"1808","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.93.236:2379","kubernetes.io/config.hash":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.mirror":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.seen":"2024-03-14T19:41:00.367789550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0314 19:42:12.246780    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:12.246780    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.246852    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.246852    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.249709    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.249751    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.249751    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.249751    8428 round_trippers.go:580]     Audit-Id: 90a3bdad-45ce-47b2-8b2f-5249440ad9b3
	I0314 19:42:12.249751    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.249751    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.249751    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.249825    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.250135    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:12.250135    8428 pod_ready.go:92] pod "etcd-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:42:12.250135    8428 pod_ready.go:81] duration metric: took 7.9285ms for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.250135    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.250707    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-442000
	I0314 19:42:12.250743    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.250743    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.250743    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.252964    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.252964    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.252964    8428 round_trippers.go:580]     Audit-Id: 1fa2a565-92e8-4391-a6d7-66bc22bbc0ee
	I0314 19:42:12.252964    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.252964    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.252964    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.252964    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.252964    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.253977    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-442000","namespace":"kube-system","uid":"ebdd5ddf-2b02-4315-bc64-1b10c383d507","resourceVersion":"1817","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.93.236:8443","kubernetes.io/config.hash":"7754d2f32966faec8123dc3b8a2af767","kubernetes.io/config.mirror":"7754d2f32966faec8123dc3b8a2af767","kubernetes.io/config.seen":"2024-03-14T19:41:00.350706636Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0314 19:42:12.254468    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:12.254525    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.254525    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.254525    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.257014    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.257411    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.257411    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.257411    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.257411    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.257411    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.257411    8428 round_trippers.go:580]     Audit-Id: 446cd449-9ee0-40f9-b0ac-290ab6ed6599
	I0314 19:42:12.257411    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.257570    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:12.257623    8428 pod_ready.go:92] pod "kube-apiserver-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:42:12.257623    8428 pod_ready.go:81] duration metric: took 7.4873ms for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.257623    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.257623    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-442000
	I0314 19:42:12.257623    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.257623    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.257623    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.260355    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.260355    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.260355    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.260355    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.260355    8428 round_trippers.go:580]     Audit-Id: 0d35a5dd-89dc-42a6-8d55-25cbe17507ed
	I0314 19:42:12.260355    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.260355    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.260355    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.261203    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-442000","namespace":"kube-system","uid":"b16fc874-ef74-44ca-a54f-bb678bf982df","resourceVersion":"1813","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.mirror":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.seen":"2024-03-14T19:18:55.420205308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0314 19:42:12.261801    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:12.261801    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.261861    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.261861    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.264651    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.264651    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.264651    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.264651    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.264651    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.264651    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.264651    8428 round_trippers.go:580]     Audit-Id: 45fed321-4ad9-4a25-be94-77537a34fc26
	I0314 19:42:12.264651    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.264651    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:12.264651    8428 pod_ready.go:92] pod "kube-controller-manager-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:42:12.265185    8428 pod_ready.go:81] duration metric: took 7.5614ms for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.265185    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.265185    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:42:12.265305    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.265305    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.265305    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.267504    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.267504    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.267504    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.267504    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.267504    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.267504    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.267504    8428 round_trippers.go:580]     Audit-Id: fc854cd4-bc4e-4993-9bee-909163a89efe
	I0314 19:42:12.267504    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.268357    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-72dzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"80b840b0-3803-4102-a966-ea73aed74f49","resourceVersion":"1892","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0314 19:42:12.268821    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:42:12.268821    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.268821    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.268821    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.271025    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.271025    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.271025    8428 round_trippers.go:580]     Audit-Id: 9cb3f786-8f00-40f6-9fde-bf6ead449876
	I0314 19:42:12.271025    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.271025    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.271520    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.271520    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.271520    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.271675    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"1896","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4582 chars]
	I0314 19:42:12.271675    8428 pod_ready.go:97] node "multinode-442000-m02" hosting pod "kube-proxy-72dzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m02" has status "Ready":"Unknown"
	I0314 19:42:12.271675    8428 pod_ready.go:81] duration metric: took 6.4894ms for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	E0314 19:42:12.271675    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000-m02" hosting pod "kube-proxy-72dzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m02" has status "Ready":"Unknown"
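
[editor's note] The "(skipping!)" pair above shows the guard applied when a pod sits on a node whose Ready condition is Unknown: the wait records it as skipped rather than blocking for 6m0s. A hedged simplification of that node-side check — names and structure are ours, not pod_ready.go's literal code:

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether a node's Ready condition is True. For
    // multinode-442000-m02 above the condition is Unknown, which is what
    // produces the "(skipping!)" messages instead of a timed-out wait.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }
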
	I0314 19:42:12.271675    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.435681    8428 request.go:629] Waited for 163.4705ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:42:12.435795    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:42:12.435795    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.435795    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.435903    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.439240    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:12.440099    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.440099    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.440099    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.440099    8428 round_trippers.go:580]     Audit-Id: 2c36939a-e5b0-4793-a24f-88836a45324b
	I0314 19:42:12.440099    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.440099    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.440099    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.440340    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cg28g","generateName":"kube-proxy-","namespace":"kube-system","uid":"c7f798bf-6722-4731-af8d-ccd5703d116e","resourceVersion":"1728","creationTimestamp":"2024-03-14T19:19:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0314 19:42:12.637591    8428 request.go:629] Waited for 196.3988ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:12.637712    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:12.637712    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.637712    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.637712    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.642075    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:12.647477    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.647865    8428 round_trippers.go:580]     Audit-Id: cf04f011-97b2-4e74-b284-1cfb245a502c
	I0314 19:42:12.647865    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.647865    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.647865    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.647865    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.647865    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.648173    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:12.648321    8428 pod_ready.go:92] pod "kube-proxy-cg28g" in "kube-system" namespace has status "Ready":"True"
	I0314 19:42:12.648321    8428 pod_ready.go:81] duration metric: took 376.6178ms for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
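
[editor's note] The 376.6ms for kube-proxy-cg28g is dominated by the two client-side throttling waits logged above (163.4705ms + 196.3988ms ≈ 360ms). As the log itself notes, this is client-go's client-side rate limiter, not API-server priority and fairness; client-go's defaults are QPS=5, Burst=10, so bursts of paired pod/node GETs queue behind it. The usual knob is the rest.Config — the values below are illustrative, not minikube's settings:

    package sketch

    import "k8s.io/client-go/rest"

    // relaxThrottle raises the client-side rate limit that caused the
    // "Waited for ... due to client-side throttling" lines above.
    func relaxThrottle(cfg *rest.Config) {
    	cfg.QPS = 50    // illustrative; default is 5
    	cfg.Burst = 100 // illustrative; default is 10
    }
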
	I0314 19:42:12.648321    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2qls" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.841758    8428 request.go:629] Waited for 193.4221ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2qls
	I0314 19:42:12.842110    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2qls
	I0314 19:42:12.842110    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.842110    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.842110    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.845842    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:12.845842    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.845842    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:13 GMT
	I0314 19:42:12.845842    8428 round_trippers.go:580]     Audit-Id: f84cb0eb-1f70-4c2e-945d-34ff75c5056d
	I0314 19:42:12.845842    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.845842    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.845842    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.845842    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.846256    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w2qls","generateName":"kube-proxy-","namespace":"kube-system","uid":"7a53e602-282e-4b63-a993-a5d23d3c615f","resourceVersion":"1678","creationTimestamp":"2024-03-14T19:26:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:26:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0314 19:42:13.043213    8428 request.go:629] Waited for 196.0671ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m03
	I0314 19:42:13.043316    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m03
	I0314 19:42:13.043316    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:13.043460    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:13.043536    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:13.046717    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:13.046717    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:13.046717    8428 round_trippers.go:580]     Audit-Id: 653afd05-a719-49ed-90fc-277195de6957
	I0314 19:42:13.046717    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:13.046717    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:13.046717    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:13.046717    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:13.046717    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:13 GMT
	I0314 19:42:13.047337    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m03","uid":"1b8e342b-6e96-49e8-a22c-874445d29fe3","resourceVersion":"1846","creationTimestamp":"2024-03-14T19:36:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_36_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:36:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0314 19:42:13.047455    8428 pod_ready.go:97] node "multinode-442000-m03" hosting pod "kube-proxy-w2qls" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m03" has status "Ready":"Unknown"
	I0314 19:42:13.047455    8428 pod_ready.go:81] duration metric: took 399.1034ms for pod "kube-proxy-w2qls" in "kube-system" namespace to be "Ready" ...
	E0314 19:42:13.047455    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000-m03" hosting pod "kube-proxy-w2qls" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m03" has status "Ready":"Unknown"
	I0314 19:42:13.047455    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:13.246321    8428 request.go:629] Waited for 198.6029ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:42:13.246864    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:42:13.246864    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:13.246938    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:13.246938    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:13.250294    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:13.250294    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:13.250294    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:13 GMT
	I0314 19:42:13.250294    8428 round_trippers.go:580]     Audit-Id: 082897c3-4608-499b-a9d7-2d539edadd7f
	I0314 19:42:13.250294    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:13.250294    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:13.250294    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:13.250294    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:13.250931    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-442000","namespace":"kube-system","uid":"76b10598-fe0d-4a14-a8e4-a32221fbb68f","resourceVersion":"1803","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.mirror":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.seen":"2024-03-14T19:18:55.420206709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0314 19:42:13.434404    8428 request.go:629] Waited for 182.7389ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:13.434528    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:13.434528    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:13.434528    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:13.434528    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:13.437885    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:13.438393    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:13.438393    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:13.438393    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:13.438393    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:13.438393    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:13.438393    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:13 GMT
	I0314 19:42:13.438393    8428 round_trippers.go:580]     Audit-Id: 7991ac5c-b2ff-42f2-b767-b0276c04ddff
	I0314 19:42:13.438599    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:13.438772    8428 pod_ready.go:92] pod "kube-scheduler-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:42:13.438772    8428 pod_ready.go:81] duration metric: took 391.2879ms for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:13.438772    8428 pod_ready.go:38] duration metric: took 31.7217073s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
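
[editor's note] The 31.7s total covers one Ready gate per label selector in the summary line above. An illustrative condensation of that final sweep — the selector list is copied from the log line, but the function is ours, not minikube's implementation:

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // allSelectorsReady reports whether every kube-system pod matching the
    // system-critical selectors from the summary line is Ready.
    func allSelectorsReady(ctx context.Context, cs kubernetes.Interface) (bool, error) {
    	selectors := []string{
    		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    		"component=kube-controller-manager", "k8s-app=kube-proxy",
    		"component=kube-scheduler",
    	}
    	for _, sel := range selectors {
    		pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
    			metav1.ListOptions{LabelSelector: sel})
    		if err != nil {
    			return false, err
    		}
    		for _, pod := range pods.Items {
    			ready := false
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					ready = true
    				}
    			}
    			if !ready {
    				return false, nil
    			}
    		}
    	}
    	return true, nil
    }
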
	I0314 19:42:13.438772    8428 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:42:13.446450    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 19:42:13.471875    8428 command_runner.go:130] > a598d24960de
	I0314 19:42:13.471923    8428 logs.go:276] 1 containers: [a598d24960de]
	I0314 19:42:13.478296    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 19:42:13.503761    8428 command_runner.go:130] > a81a9c43c355
	I0314 19:42:13.503916    8428 logs.go:276] 1 containers: [a81a9c43c355]
	I0314 19:42:13.511298    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 19:42:13.537609    8428 command_runner.go:130] > b159aedddf94
	I0314 19:42:13.537691    8428 command_runner.go:130] > 8899bc003893
	I0314 19:42:13.537852    8428 logs.go:276] 2 containers: [b159aedddf94 8899bc003893]
	I0314 19:42:13.544603    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 19:42:13.572306    8428 command_runner.go:130] > 32d90a3ea213
	I0314 19:42:13.572441    8428 command_runner.go:130] > dbb603289bf1
	I0314 19:42:13.572520    8428 logs.go:276] 2 containers: [32d90a3ea213 dbb603289bf1]
	I0314 19:42:13.580913    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 19:42:13.605087    8428 command_runner.go:130] > 497007582e44
	I0314 19:42:13.605087    8428 command_runner.go:130] > 2a62baf3f1b4
	I0314 19:42:13.605087    8428 logs.go:276] 2 containers: [497007582e44 2a62baf3f1b4]
	I0314 19:42:13.614960    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 19:42:13.639965    8428 command_runner.go:130] > 12baf105f0bb
	I0314 19:42:13.640856    8428 command_runner.go:130] > 16b80f73683d
	I0314 19:42:13.641108    8428 logs.go:276] 2 containers: [12baf105f0bb 16b80f73683d]
	I0314 19:42:13.648277    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 19:42:13.672225    8428 command_runner.go:130] > 999e4c168afe
	I0314 19:42:13.672628    8428 command_runner.go:130] > 1a321c0e8997
	I0314 19:42:13.672824    8428 logs.go:276] 2 containers: [999e4c168afe 1a321c0e8997]
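
[editor's note] The seven lookups above each run one docker ps filter per component (k8s_kube-apiserver, k8s_etcd, k8s_coredns, ...) over SSH and collect container IDs for the log gathering that starts next. A local sketch of that pattern via os/exec — helper name and error handling are ours; the docker flags are exactly those shown in the log:

    package sketch

    import (
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors the per-component lookups above, e.g.
    // docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}.
    // Returns one ID per matching container (current or exited).
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }
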
	I0314 19:42:13.672824    8428 logs.go:123] Gathering logs for kindnet [1a321c0e8997] ...
	I0314 19:42:13.672824    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a321c0e8997"
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:36.366640       1 main.go:227] handling current node
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:36.366652       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:36.366658       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:36.366818       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:36.366827       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:46.378468       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:46.378496       1 main.go:227] handling current node
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:46.378506       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:46.378513       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:46.379039       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:46.379130       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:56.393642       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:56.393700       1 main.go:227] handling current node
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:56.393723       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:56.393733       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:56.394716       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:56.394779       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:06.403171       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:06.403199       1 main.go:227] handling current node
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:06.403212       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:06.403219       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:06.403663       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:06.403834       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:16.415146       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:16.415237       1 main.go:227] handling current node
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:16.415250       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:16.415260       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:16.415497       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:16.415703       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:26.430257       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:26.430350       1 main.go:227] handling current node
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:26.430364       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:26.430372       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.711739    8428 command_runner.go:130] ! I0314 19:28:26.430709       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.711739    8428 command_runner.go:130] ! I0314 19:28:26.430804       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.711739    8428 command_runner.go:130] ! I0314 19:28:36.445854       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.711739    8428 command_runner.go:130] ! I0314 19:28:36.445897       1 main.go:227] handling current node
	I0314 19:42:13.711871    8428 command_runner.go:130] ! I0314 19:28:36.445915       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.711871    8428 command_runner.go:130] ! I0314 19:28:36.446285       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.711871    8428 command_runner.go:130] ! I0314 19:28:36.446702       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.711871    8428 command_runner.go:130] ! I0314 19:28:36.446731       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.711981    8428 command_runner.go:130] ! I0314 19:28:46.461369       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.711981    8428 command_runner.go:130] ! I0314 19:28:46.462057       1 main.go:227] handling current node
	I0314 19:42:13.711981    8428 command_runner.go:130] ! I0314 19:28:46.462235       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.711981    8428 command_runner.go:130] ! I0314 19:28:46.462250       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.712076    8428 command_runner.go:130] ! I0314 19:28:46.462593       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.712076    8428 command_runner.go:130] ! I0314 19:28:46.462770       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.712076    8428 command_runner.go:130] ! I0314 19:28:56.477451       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.712201    8428 command_runner.go:130] ! I0314 19:28:56.477483       1 main.go:227] handling current node
	I0314 19:42:13.712201    8428 command_runner.go:130] ! I0314 19:28:56.477496       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.712201    8428 command_runner.go:130] ! I0314 19:28:56.477508       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.712298    8428 command_runner.go:130] ! I0314 19:28:56.478007       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.712298    8428 command_runner.go:130] ! I0314 19:28:56.478089       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.712298    8428 command_runner.go:130] ! I0314 19:29:06.484423       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.712298    8428 command_runner.go:130] ! I0314 19:29:06.484497       1 main.go:227] handling current node
	I0314 19:42:13.712298    8428 command_runner.go:130] ! I0314 19:29:06.484559       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.712406    8428 command_runner.go:130] ! I0314 19:29:06.484624       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.712756    8428 command_runner.go:130] ! I0314 19:29:06.484852       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.712864    8428 command_runner.go:130] ! I0314 19:29:06.484945       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.712864    8428 command_runner.go:130] ! I0314 19:29:16.500812       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.712864    8428 command_runner.go:130] ! I0314 19:29:16.500909       1 main.go:227] handling current node
	I0314 19:42:13.712961    8428 command_runner.go:130] ! I0314 19:29:16.500924       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.712983    8428 command_runner.go:130] ! I0314 19:29:16.500932       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.713061    8428 command_runner.go:130] ! I0314 19:29:16.501505       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.713061    8428 command_runner.go:130] ! I0314 19:29:16.501585       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.713061    8428 command_runner.go:130] ! I0314 19:29:26.508494       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.713061    8428 command_runner.go:130] ! I0314 19:29:26.508585       1 main.go:227] handling current node
	I0314 19:42:13.713061    8428 command_runner.go:130] ! I0314 19:29:26.508601       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.713171    8428 command_runner.go:130] ! I0314 19:29:26.508609       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.713171    8428 command_runner.go:130] ! I0314 19:29:26.508822       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.713171    8428 command_runner.go:130] ! I0314 19:29:26.508837       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.713171    8428 command_runner.go:130] ! I0314 19:29:36.517002       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.713279    8428 command_runner.go:130] ! I0314 19:29:36.517123       1 main.go:227] handling current node
	I0314 19:42:13.713279    8428 command_runner.go:130] ! I0314 19:29:36.517142       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.713279    8428 command_runner.go:130] ! I0314 19:29:36.517155       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.713279    8428 command_runner.go:130] ! I0314 19:29:36.517648       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.713382    8428 command_runner.go:130] ! I0314 19:29:36.517836       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.713382    8428 command_runner.go:130] ! I0314 19:29:46.530826       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.713382    8428 command_runner.go:130] ! I0314 19:29:46.530962       1 main.go:227] handling current node
	I0314 19:42:13.713476    8428 command_runner.go:130] ! I0314 19:29:46.530978       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.713476    8428 command_runner.go:130] ! I0314 19:29:46.531314       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.713476    8428 command_runner.go:130] ! I0314 19:29:46.531557       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.713568    8428 command_runner.go:130] ! I0314 19:29:46.531706       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.713568    8428 command_runner.go:130] ! I0314 19:29:56.551916       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.713568    8428 command_runner.go:130] ! I0314 19:29:56.551953       1 main.go:227] handling current node
	I0314 19:42:13.713568    8428 command_runner.go:130] ! I0314 19:29:56.551965       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.713702    8428 command_runner.go:130] ! I0314 19:29:56.551971       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.713789    8428 command_runner.go:130] ! I0314 19:29:56.552084       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.713789    8428 command_runner.go:130] ! I0314 19:29:56.552107       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.713789    8428 command_runner.go:130] ! I0314 19:30:06.560066       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.713864    8428 command_runner.go:130] ! I0314 19:30:06.560115       1 main.go:227] handling current node
	I0314 19:42:13.713864    8428 command_runner.go:130] ! I0314 19:30:06.560129       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.713933    8428 command_runner.go:130] ! I0314 19:30:06.560136       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.713956    8428 command_runner.go:130] ! I0314 19:30:06.560429       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.714039    8428 command_runner.go:130] ! I0314 19:30:06.560534       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.714062    8428 command_runner.go:130] ! I0314 19:30:16.573690       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.714135    8428 command_runner.go:130] ! I0314 19:30:16.573731       1 main.go:227] handling current node
	I0314 19:42:13.714135    8428 command_runner.go:130] ! I0314 19:30:16.573978       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.714208    8428 command_runner.go:130] ! I0314 19:30:16.574067       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.714208    8428 command_runner.go:130] ! I0314 19:30:16.574385       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.714256    8428 command_runner.go:130] ! I0314 19:30:16.574414       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.714256    8428 command_runner.go:130] ! I0314 19:30:26.589277       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.714256    8428 command_runner.go:130] ! I0314 19:30:26.589488       1 main.go:227] handling current node
	I0314 19:42:13.714331    8428 command_runner.go:130] ! I0314 19:30:26.589534       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.714415    8428 command_runner.go:130] ! I0314 19:30:26.589557       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:26.589802       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:26.589885       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:36.605356       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:36.605400       1 main.go:227] handling current node
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:36.605412       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:36.605418       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:36.605556       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:36.605625       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:46.612911       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:46.613010       1 main.go:227] handling current node
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:46.613025       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:46.613034       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:46.613445       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:46.615380       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:56.630605       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:56.630965       1 main.go:227] handling current node
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:56.631076       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:56.631132       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:56.631442       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.714974    8428 command_runner.go:130] ! I0314 19:30:56.631542       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.715094    8428 command_runner.go:130] ! I0314 19:31:06.643588       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.715094    8428 command_runner.go:130] ! I0314 19:31:06.643631       1 main.go:227] handling current node
	I0314 19:42:13.715094    8428 command_runner.go:130] ! I0314 19:31:06.643643       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.715094    8428 command_runner.go:130] ! I0314 19:31:06.643650       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.715198    8428 command_runner.go:130] ! I0314 19:31:06.644160       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.715198    8428 command_runner.go:130] ! I0314 19:31:06.644255       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.715198    8428 command_runner.go:130] ! I0314 19:31:16.650940       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.715198    8428 command_runner.go:130] ! I0314 19:31:16.651187       1 main.go:227] handling current node
	I0314 19:42:13.715309    8428 command_runner.go:130] ! I0314 19:31:16.651208       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.715309    8428 command_runner.go:130] ! I0314 19:31:16.651236       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.715309    8428 command_runner.go:130] ! I0314 19:31:16.651354       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.715309    8428 command_runner.go:130] ! I0314 19:31:16.651374       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.715413    8428 command_runner.go:130] ! I0314 19:31:26.665304       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.715413    8428 command_runner.go:130] ! I0314 19:31:26.665403       1 main.go:227] handling current node
	I0314 19:42:13.715413    8428 command_runner.go:130] ! I0314 19:31:26.665418       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.715509    8428 command_runner.go:130] ! I0314 19:31:26.665427       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.715509    8428 command_runner.go:130] ! I0314 19:31:26.665674       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.715509    8428 command_runner.go:130] ! I0314 19:31:26.665859       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.715603    8428 command_runner.go:130] ! I0314 19:31:36.681645       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.715603    8428 command_runner.go:130] ! I0314 19:31:36.681680       1 main.go:227] handling current node
	I0314 19:42:13.715603    8428 command_runner.go:130] ! I0314 19:31:36.681695       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.715603    8428 command_runner.go:130] ! I0314 19:31:36.681704       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.715699    8428 command_runner.go:130] ! I0314 19:31:36.682032       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.715699    8428 command_runner.go:130] ! I0314 19:31:36.682062       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.715699    8428 command_runner.go:130] ! I0314 19:31:46.697305       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.715699    8428 command_runner.go:130] ! I0314 19:31:46.697415       1 main.go:227] handling current node
	I0314 19:42:13.715804    8428 command_runner.go:130] ! I0314 19:31:46.697432       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.715804    8428 command_runner.go:130] ! I0314 19:31:46.697444       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.715804    8428 command_runner.go:130] ! I0314 19:31:46.697965       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.715804    8428 command_runner.go:130] ! I0314 19:31:46.698093       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.715916    8428 command_runner.go:130] ! I0314 19:31:56.705518       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.715916    8428 command_runner.go:130] ! I0314 19:31:56.705613       1 main.go:227] handling current node
	I0314 19:42:13.715985    8428 command_runner.go:130] ! I0314 19:31:56.705627       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.716020    8428 command_runner.go:130] ! I0314 19:31:56.705635       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.716020    8428 command_runner.go:130] ! I0314 19:31:56.706151       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.716064    8428 command_runner.go:130] ! I0314 19:31:56.706269       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.716097    8428 command_runner.go:130] ! I0314 19:32:06.716977       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.716097    8428 command_runner.go:130] ! I0314 19:32:06.717087       1 main.go:227] handling current node
	I0314 19:42:13.716170    8428 command_runner.go:130] ! I0314 19:32:06.717105       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:06.717116       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:06.717701       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:06.717870       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:16.738903       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:16.738946       1 main.go:227] handling current node
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:16.738962       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:16.738971       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:16.739310       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:16.739420       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:26.749067       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:26.749521       1 main.go:227] handling current node
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:26.749656       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:26.749670       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:26.750040       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:26.750074       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:36.765313       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:36.765423       1 main.go:227] handling current node
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:36.765442       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:36.765453       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:36.766102       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:36.766130       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:46.781715       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:46.781800       1 main.go:227] handling current node
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:46.782151       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.716802    8428 command_runner.go:130] ! I0314 19:32:46.782168       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.716942    8428 command_runner.go:130] ! I0314 19:32:46.782370       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.716942    8428 command_runner.go:130] ! I0314 19:32:46.782396       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.717018    8428 command_runner.go:130] ! I0314 19:32:56.797473       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.717041    8428 command_runner.go:130] ! I0314 19:32:56.797568       1 main.go:227] handling current node
	I0314 19:42:13.717041    8428 command_runner.go:130] ! I0314 19:32:56.797583       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.717115    8428 command_runner.go:130] ! I0314 19:32:56.797621       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.717184    8428 command_runner.go:130] ! I0314 19:32:56.797733       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.717219    8428 command_runner.go:130] ! I0314 19:32:56.797772       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.717219    8428 command_runner.go:130] ! I0314 19:33:06.803421       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.717219    8428 command_runner.go:130] ! I0314 19:33:06.803513       1 main.go:227] handling current node
	I0314 19:42:13.717365    8428 command_runner.go:130] ! I0314 19:33:06.803527       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.717418    8428 command_runner.go:130] ! I0314 19:33:06.803534       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.717418    8428 command_runner.go:130] ! I0314 19:33:06.804158       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.717418    8428 command_runner.go:130] ! I0314 19:33:06.804237       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.717495    8428 command_runner.go:130] ! I0314 19:33:16.818983       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:16.819134       1 main.go:227] handling current node
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:16.819149       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:16.819157       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:16.819421       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:16.819491       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:26.826209       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:26.826474       1 main.go:227] handling current node
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:26.826509       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:26.826519       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:26.826794       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:26.826886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:36.839979       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:36.840555       1 main.go:227] handling current node
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:36.840828       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:36.840855       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:36.841055       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:36.841183       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:46.854483       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:46.854585       1 main.go:227] handling current node
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:46.854600       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:46.854608       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:46.855303       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:46.855389       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.718092    8428 command_runner.go:130] ! I0314 19:33:56.867052       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.718092    8428 command_runner.go:130] ! I0314 19:33:56.867136       1 main.go:227] handling current node
	I0314 19:42:13.718092    8428 command_runner.go:130] ! I0314 19:33:56.867150       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.718194    8428 command_runner.go:130] ! I0314 19:33:56.867158       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.718281    8428 command_runner.go:130] ! I0314 19:33:56.867493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:33:56.867886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:06.874298       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:06.874391       1 main.go:227] handling current node
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:06.874405       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:06.874413       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:06.874932       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:06.874962       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:16.890513       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:16.890589       1 main.go:227] handling current node
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:16.890604       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:16.890612       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:16.890870       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:16.890953       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:26.908423       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:26.908576       1 main.go:227] handling current node
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:26.908597       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:26.908606       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:26.909103       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:26.909271       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:36.915794       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:36.915910       1 main.go:227] handling current node
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:36.915926       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.718853    8428 command_runner.go:130] ! I0314 19:34:36.915935       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.718853    8428 command_runner.go:130] ! I0314 19:34:36.916282       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.718853    8428 command_runner.go:130] ! I0314 19:34:36.916372       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.718853    8428 command_runner.go:130] ! I0314 19:34:46.931699       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.718853    8428 command_runner.go:130] ! I0314 19:34:46.931833       1 main.go:227] handling current node
	I0314 19:42:13.718974    8428 command_runner.go:130] ! I0314 19:34:46.931849       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:46.931858       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:46.932099       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:46.932124       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:56.946470       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:56.946565       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:56.946580       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:56.946588       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:56.946812       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:56.946927       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:06.960844       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:06.960939       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:06.960954       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:06.960962       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:06.961467       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:06.961574       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:16.981993       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:16.982080       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:16.982095       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:16.982103       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:16.982594       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:16.982673       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:26.993848       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:26.993940       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:26.993955       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:26.993963       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:26.994360       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:26.994437       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:37.008613       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:37.008706       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:37.008720       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:37.008727       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:37.009233       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:37.009320       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:47.018420       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:47.018526       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:47.018541       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:47.018549       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:47.018669       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:47.018680       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:57.025132       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:57.025207       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:57.025220       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719561    8428 command_runner.go:130] ! I0314 19:35:57.025228       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719561    8428 command_runner.go:130] ! I0314 19:35:57.026009       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719561    8428 command_runner.go:130] ! I0314 19:35:57.026145       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719561    8428 command_runner.go:130] ! I0314 19:36:07.042281       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719561    8428 command_runner.go:130] ! I0314 19:36:07.042353       1 main.go:227] handling current node
	I0314 19:42:13.719561    8428 command_runner.go:130] ! I0314 19:36:07.042367       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719643    8428 command_runner.go:130] ! I0314 19:36:07.042375       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719643    8428 command_runner.go:130] ! I0314 19:36:07.042493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719643    8428 command_runner.go:130] ! I0314 19:36:07.042500       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719693    8428 command_runner.go:130] ! I0314 19:36:17.055539       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719693    8428 command_runner.go:130] ! I0314 19:36:17.055567       1 main.go:227] handling current node
	I0314 19:42:13.719693    8428 command_runner.go:130] ! I0314 19:36:17.055581       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:17.055588       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:17.056312       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:17.056341       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:27.067921       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:27.067961       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:27.069052       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:27.069179       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:27.069306       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:27.069332       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:37.082322       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:37.082413       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:37.082429       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:37.082437       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:37.082972       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:37.083000       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:47.099685       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:47.099830       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:47.099862       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:47.099982       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.107274       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.107368       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.107382       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.107390       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.107827       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.107942       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.108076       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.84.215 Flags: [] Table: 0} 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:07.120709       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:07.121059       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:07.121098       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:07.121109       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:07.121440       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:07.121455       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:17.137704       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:17.137784       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:17.137796       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:17.137803       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:17.138265       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:17.138298       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:27.144505       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:27.144594       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:27.144607       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:27.144615       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:27.145062       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:27.145092       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:37.154684       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:37.154836       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:37.154851       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:37.154860       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:37.155452       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:37.155614       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:47.168249       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:47.168338       1 main.go:227] handling current node
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:47.168352       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:47.168360       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:47.168976       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:47.169064       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.720464    8428 command_runner.go:130] ! I0314 19:37:57.176039       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.720464    8428 command_runner.go:130] ! I0314 19:37:57.176130       1 main.go:227] handling current node
	I0314 19:42:13.720506    8428 command_runner.go:130] ! I0314 19:37:57.176145       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.720506    8428 command_runner.go:130] ! I0314 19:37:57.176153       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:37:57.176528       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:37:57.176659       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:07.189890       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:07.189993       1 main.go:227] handling current node
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:07.190008       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:07.190016       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:07.190217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:07.190245       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:17.196541       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:17.196633       1 main.go:227] handling current node
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:17.196647       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:17.196655       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:17.196888       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:17.197012       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:27.217365       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:27.217460       1 main.go:227] handling current node
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:27.217475       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:27.217483       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:27.217621       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:27.217634       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:37.229941       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:37.230048       1 main.go:227] handling current node
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:37.230062       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:37.230070       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:37.230268       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:37.230338       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
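	The kindnet entries above are its periodic node-sync loop, firing roughly every ten seconds: list the cluster's nodes, skip the local one ("handling current node"), and make sure the host routing table reaches each remote node's pod CIDR via that node's IP. When multinode-442000-m03 reappeared at 19:36:57 with a new IP (172.17.84.215) and a new pod CIDR (10.244.3.0/24), the loop installed a matching route. A minimal sketch of that reconciliation step, assuming the github.com/vishvananda/netlink package (whose Route string form matches the "Adding route {Ifindex: 0 Dst: ... Gw: ...}" entry above); this is an illustration, not kindnet's actual source:

package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

// ensurePodCIDRRoute sends traffic for a remote node's pod CIDR via that
// node's primary IP, mirroring the "Adding route {... Dst: 10.244.3.0/24
// ... Gw: 172.17.84.215 ...}" entry in the log above.
func ensurePodCIDRRoute(podCIDR, nodeIP string) error {
	_, dst, err := net.ParseCIDR(podCIDR)
	if err != nil {
		return err
	}
	// RouteReplace is idempotent: it adds the route if missing and updates
	// the gateway if it changed, so the loop can rerun every ten seconds
	// without erroring on already-present routes.
	return netlink.RouteReplace(&netlink.Route{
		Dst: dst,
		Gw:  net.ParseIP(nodeIP),
	})
}

func main() {
	if err := ensurePodCIDRRoute("10.244.3.0/24", "172.17.84.215"); err != nil {
		log.Fatal(err)
	}
}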
	I0314 19:42:13.737472    8428 logs.go:123] Gathering logs for kubelet ...
	I0314 19:42:13.737472    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
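	At this point the log gatherer shells into the node and pulls the last 400 kubelet journal entries for the post-mortem. A hedged local-machine equivalent of that command, using plain os/exec in place of minikube's SSH-backed command runner:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// -u kubelet restricts output to the kubelet unit; -n 400 caps it at the
	// last 400 lines, matching the invocation recorded in the log above.
	out, err := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").CombinedOutput()
	if err != nil {
		log.Fatalf("journalctl failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}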
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516074    1388 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516440    1388 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516773    1388 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: E0314 19:40:57.516893    1388 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293295    1450 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293422    1450 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293759    1450 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: E0314 19:40:58.293809    1450 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
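	The two aborted starts above are kubelet exiting with status 1 because /etc/kubernetes/bootstrap-kubelet.conf does not exist yet, after which systemd's restart policy relaunches the unit; the third start at 19:41:00 below succeeds because the rotated client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem is in place by then. The failing precondition is a plain stat call. A minimal sketch that reproduces only the error shape seen in the log, not kubelet's actual source:

package main

import (
	"fmt"
	"os"
)

// loadBootstrapKubeconfig fails the same way as the log's "unable to load
// bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such
// file or directory" when the file is absent.
func loadBootstrapKubeconfig(path string) error {
	if _, err := os.Stat(path); err != nil {
		// os.Stat wraps the syscall error, producing the
		// "stat <path>: ..." text quoted in the journal above.
		return fmt.Errorf("failed to run Kubelet: unable to load bootstrap kubeconfig: %w", err)
	}
	return nil // the real kubelet would now parse and use the kubeconfig
}

func main() {
	if err := loadBootstrapKubeconfig("/etc/kubernetes/bootstrap-kubelet.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // systemd records status=1/FAILURE and schedules a restart
	}
}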
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270178    1523 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270275    1523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270469    1523 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.272943    1523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.286808    1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.333673    1523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335204    1523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335543    1523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335688    1523 topology_manager.go:138] "Creating topology manager with none policy"
	I0314 19:42:13.767603    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335703    1523 container_manager_linux.go:301] "Creating device plugin manager"
	I0314 19:42:13.767603    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.336879    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0314 19:42:13.767603    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.338507    1523 kubelet.go:393] "Attempting to sync node with API server"
	I0314 19:42:13.767603    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.338606    1523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0314 19:42:13.767603    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.339942    1523 kubelet.go:309] "Adding apiserver pod source"
	I0314 19:42:13.767681    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.339973    1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0314 19:42:13.767681    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.342644    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.767742    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.342728    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.767810    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.352846    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.767833    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.353005    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.767833    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.362091    1523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0314 19:42:13.767833    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.368654    1523 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0314 19:42:13.767886    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.370831    1523 server.go:1232] "Started kubelet"
	I0314 19:42:13.767886    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.376404    1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0314 19:42:13.767886    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.381472    1523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0314 19:42:13.767886    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.381715    1523 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0314 19:42:13.767941    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.383735    1523 server.go:462] "Adding debug handlers to kubelet server"
	I0314 19:42:13.767941    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.385265    1523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0314 19:42:13.767990    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.387577    1523 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0314 19:42:13.768012    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.392182    1523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0314 19:42:13.768079    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.392853    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="200ms"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.392921    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.392970    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.402867    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-442000.17bcb8e6e82683f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-442000", UID:"multinode-442000", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-442000"}, FirstTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), LastTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-442000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.93.236:8443: connect: connection refused'(may retry after sleeping)
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.431568    1523 reconciler_new.go:29] "Reconciler: start to sync state"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453043    1523 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453062    1523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453088    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453812    1523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453838    1523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453846    1523 policy_none.go:49] "None policy: Start"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.459854    1523 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.459925    1523 state_mem.go:35] "Initializing new in-memory state store"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.460715    1523 state_mem.go:75] "Updated machine memory state"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.466366    1523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.471455    1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.475344    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478780    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478820    1523 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478846    1523 kubelet.go:2303] "Starting kubelet main sync loop"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.478899    1523 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.485952    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.487569    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.493845    1523 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-442000\" not found"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.501023    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:13.768644    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.501915    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:13.768684    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.503739    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0314 19:42:13.768716    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0314 19:42:13.768716    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0314 19:42:13.768752    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0314 19:42:13.768752    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0314 19:42:13.768752    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.578961    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5b88117f99a24e81a324ab026c69a7058a7c1bc88d9b9a5386134abc257bba"
	I0314 19:42:13.768752    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.578983    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e39762d7a6437164a9b2c6dd22b1f36b57514310190ce4acc3349001cb1774"
	I0314 19:42:13.768828    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.579017    1523 topology_manager.go:215] "Topology Admit Handler" podUID="2b2434280023596d1e3c90125a7219ed" podNamespace="kube-system" podName="kube-scheduler-multinode-442000"
	I0314 19:42:13.768828    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.592991    1523 topology_manager.go:215] "Topology Admit Handler" podUID="7754d2f32966faec8123dc3b8a2af767" podNamespace="kube-system" podName="kube-apiserver-multinode-442000"
	I0314 19:42:13.768902    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.594193    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="400ms"
	I0314 19:42:13.768958    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.609977    1523 topology_manager.go:215] "Topology Admit Handler" podUID="a7ee530f2bd843eddeace8cd6ec0d204" podNamespace="kube-system" podName="kube-controller-manager-multinode-442000"
	I0314 19:42:13.768958    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.622973    1523 topology_manager.go:215] "Topology Admit Handler" podUID="fa99a5621d016aa714804afcaa1e0a53" podNamespace="kube-system" podName="etcd-multinode-442000"
	I0314 19:42:13.769023    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.634832    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b2434280023596d1e3c90125a7219ed-kubeconfig\") pod \"kube-scheduler-multinode-442000\" (UID: \"2b2434280023596d1e3c90125a7219ed\") " pod="kube-system/kube-scheduler-multinode-442000"
	I0314 19:42:13.769023    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640587    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b179d157b6b2f71cc980c7ea5060a613be77e84e89947fbcb91a687ea7310eaf"
	I0314 19:42:13.769058    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640610    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046b896affe9f3219822b857a6b4dfa1427854d5df420b6b2e1cec631372548"
	I0314 19:42:13.769096    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640625    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773"
	I0314 19:42:13.769130    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640637    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b3244b47278e22e56ab0362b7a74ee80ca2806fb1074d718b0278b5bc70be76"
	I0314 19:42:13.769167    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640648    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0"
	I0314 19:42:13.769167    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640663    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="102c907609a3ac28e95d46e2671477684c5a043672e21597c677ee9dbfcb7e08"
	I0314 19:42:13.769204    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640674    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab390fc53b998ec55449f16c05933add797f430f2cc6f4b55afabf79cd8b0bc7"
	I0314 19:42:13.769204    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.713400    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:13.769262    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.714712    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:13.769311    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736377    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-ca-certs\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:13.769346    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736439    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-k8s-certs\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736466    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736490    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-flexvolume-dir\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736521    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-k8s-certs\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736546    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/fa99a5621d016aa714804afcaa1e0a53-etcd-certs\") pod \"etcd-multinode-442000\" (UID: \"fa99a5621d016aa714804afcaa1e0a53\") " pod="kube-system/etcd-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736609    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-ca-certs\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736642    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-kubeconfig\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736675    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736706    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/fa99a5621d016aa714804afcaa1e0a53-etcd-data\") pod \"etcd-multinode-442000\" (UID: \"fa99a5621d016aa714804afcaa1e0a53\") " pod="kube-system/etcd-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.996146    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="800ms"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.009288    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-442000.17bcb8e6e82683f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-442000", UID:"multinode-442000", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-442000"}, FirstTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), LastTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-442000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.93.236:8443: connect: connection refused'(may retry after sleeping)
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.128790    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.130034    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:13.769917    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.475229    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.769959    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.475367    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.769994    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.647700    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.647839    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.684558    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c70744e60ac50b50085376d0c124ff15cc884b8a836b0085ef71a65ddb06bcfd"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.767121    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.767283    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.797772    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="1.6s"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.907277    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.907408    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.963548    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.967786    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:03 multinode-442000 kubelet[1523]: I0314 19:41:03.581966    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.875219    1523 kubelet_node_status.go:108] "Node was previously registered" node="multinode-442000"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.875953    1523 kubelet_node_status.go:73] "Successfully registered node" node="multinode-442000"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.881726    1523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.882677    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.894905    1523 setters.go:552] "Node became not ready" node="multinode-442000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-14T19:41:05Z","lastTransitionTime":"2024-03-14T19:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: E0314 19:41:05.973748    1523 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-multinode-442000\" already exists" pod="kube-system/etcd-multinode-442000"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.346543    1523 apiserver.go:52] "Watching apiserver"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355573    1523 topology_manager.go:215] "Topology Admit Handler" podUID="677b9084-0026-4b21-b041-445940624ed7" podNamespace="kube-system" podName="kindnet-7b9lf"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355823    1523 topology_manager.go:215] "Topology Admit Handler" podUID="c7f798bf-6722-4731-af8d-ccd5703d116e" podNamespace="kube-system" podName="kube-proxy-cg28g"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355970    1523 topology_manager.go:215] "Topology Admit Handler" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac" podNamespace="kube-system" podName="coredns-5dd5756b68-d22jc"
	I0314 19:42:13.770580    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.356220    1523 topology_manager.go:215] "Topology Admit Handler" podUID="65d76566-4401-4b28-8452-10ed98624901" podNamespace="kube-system" podName="storage-provisioner"
	I0314 19:42:13.770619    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.356515    1523 topology_manager.go:215] "Topology Admit Handler" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2" podNamespace="default" podName="busybox-5b5d89c9d6-7446n"
	I0314 19:42:13.770691    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.356776    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.770725    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.356948    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.360847    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-442000" podUID="02a2d011-5f4c-451c-9698-a88e42e4b6c9"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.388530    1523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.394882    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419699    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7f798bf-6722-4731-af8d-ccd5703d116e-xtables-lock\") pod \"kube-proxy-cg28g\" (UID: \"c7f798bf-6722-4731-af8d-ccd5703d116e\") " pod="kube-system/kube-proxy-cg28g"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419828    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-cni-cfg\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419854    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-lib-modules\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419895    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/65d76566-4401-4b28-8452-10ed98624901-tmp\") pod \"storage-provisioner\" (UID: \"65d76566-4401-4b28-8452-10ed98624901\") " pod="kube-system/storage-provisioner"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419943    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-xtables-lock\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.420062    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7f798bf-6722-4731-af8d-ccd5703d116e-lib-modules\") pod \"kube-proxy-cg28g\" (UID: \"c7f798bf-6722-4731-af8d-ccd5703d116e\") " pod="kube-system/kube-proxy-cg28g"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.420370    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.420509    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:06.920467401 +0000 UTC m=+6.742091622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447169    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447481    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771292    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447769    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:06.9477485 +0000 UTC m=+6.769372721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771292    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.496544    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81fdcd9740169a0b72b7c7316eeac39f" path="/var/lib/kubelet/pods/81fdcd9740169a0b72b7c7316eeac39f/volumes"
	I0314 19:42:13.771292    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.497856    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="92e70beb375f9f247f5f8395dc065033" path="/var/lib/kubelet/pods/92e70beb375f9f247f5f8395dc065033/volumes"
	I0314 19:42:13.771373    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.840791    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-442000" podUID="8974ad44-5d36-48f0-bc6b-9115bab5fb5e"
	I0314 19:42:13.771373    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.864488    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-442000" podStartSLOduration=0.864428449 podCreationTimestamp="2024-03-14 19:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:41:06.656175631 +0000 UTC m=+6.477799952" watchObservedRunningTime="2024-03-14 19:41:06.864428449 +0000 UTC m=+6.686052670"
	I0314 19:42:13.771443    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.889820    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-442000"
	I0314 19:42:13.771443    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.925613    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.771514    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.925789    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:07.925744766 +0000 UTC m=+7.747368987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.771514    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026456    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771584    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026485    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771584    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026583    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:08.02656612 +0000 UTC m=+7.848190341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771655    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.479340    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.771728    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.479540    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.771728    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.934416    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.771728    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.934566    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:09.934544359 +0000 UTC m=+9.756168580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.771818    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035285    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771818    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035328    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771818    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035382    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:10.035364414 +0000 UTC m=+9.856988635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771919    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: I0314 19:41:08.192454    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-442000" podUID="8974ad44-5d36-48f0-bc6b-9115bab5fb5e"
	I0314 19:42:13.771919    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: I0314 19:41:08.232807    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-442000" podStartSLOduration=2.232765597 podCreationTimestamp="2024-03-14 19:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:41:08.211688076 +0000 UTC m=+8.033312297" watchObservedRunningTime="2024-03-14 19:41:08.232765597 +0000 UTC m=+8.054389818"
	I0314 19:42:13.772000    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.480285    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772073    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.480350    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772073    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.954598    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.772141    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.954683    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:13.95466674 +0000 UTC m=+13.776290961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.772141    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055917    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772141    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055948    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772265    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055999    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:14.055983733 +0000 UTC m=+13.877608054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772265    8428 command_runner.go:130] > Mar 14 19:41:11 multinode-442000 kubelet[1523]: E0314 19:41:11.480167    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772338    8428 command_runner.go:130] > Mar 14 19:41:11 multinode-442000 kubelet[1523]: E0314 19:41:11.480285    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772415    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.480095    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772441    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.480797    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772476    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.988392    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.772546    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.988528    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:21.98850961 +0000 UTC m=+21.810133831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.772593    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089208    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772627    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089365    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772691    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089427    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:22.089409571 +0000 UTC m=+21.911033792 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772739    8428 command_runner.go:130] > Mar 14 19:41:15 multinode-442000 kubelet[1523]: E0314 19:41:15.480116    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772779    8428 command_runner.go:130] > Mar 14 19:41:15 multinode-442000 kubelet[1523]: E0314 19:41:15.480286    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772779    8428 command_runner.go:130] > Mar 14 19:41:17 multinode-442000 kubelet[1523]: E0314 19:41:17.479583    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772863    8428 command_runner.go:130] > Mar 14 19:41:17 multinode-442000 kubelet[1523]: E0314 19:41:17.480025    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772863    8428 command_runner.go:130] > Mar 14 19:41:19 multinode-442000 kubelet[1523]: E0314 19:41:19.480562    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772943    8428 command_runner.go:130] > Mar 14 19:41:19 multinode-442000 kubelet[1523]: E0314 19:41:19.480625    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:21 multinode-442000 kubelet[1523]: E0314 19:41:21.479895    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:21 multinode-442000 kubelet[1523]: E0314 19:41:21.480437    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.061436    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.061515    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:38.061499618 +0000 UTC m=+37.883123839 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162555    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162603    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162667    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:38.162650651 +0000 UTC m=+37.984274872 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:23 multinode-442000 kubelet[1523]: E0314 19:41:23.480157    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:23 multinode-442000 kubelet[1523]: E0314 19:41:23.481151    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:25 multinode-442000 kubelet[1523]: E0314 19:41:25.479970    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:25 multinode-442000 kubelet[1523]: E0314 19:41:25.480065    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:27 multinode-442000 kubelet[1523]: E0314 19:41:27.480032    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:27 multinode-442000 kubelet[1523]: E0314 19:41:27.480122    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.773497    8428 command_runner.go:130] > Mar 14 19:41:29 multinode-442000 kubelet[1523]: E0314 19:41:29.480034    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.773497    8428 command_runner.go:130] > Mar 14 19:41:29 multinode-442000 kubelet[1523]: E0314 19:41:29.480291    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.773588    8428 command_runner.go:130] > Mar 14 19:41:31 multinode-442000 kubelet[1523]: E0314 19:41:31.479554    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.773588    8428 command_runner.go:130] > Mar 14 19:41:31 multinode-442000 kubelet[1523]: E0314 19:41:31.479650    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.773662    8428 command_runner.go:130] > Mar 14 19:41:33 multinode-442000 kubelet[1523]: E0314 19:41:33.479299    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.773662    8428 command_runner.go:130] > Mar 14 19:41:33 multinode-442000 kubelet[1523]: E0314 19:41:33.479835    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.773735    8428 command_runner.go:130] > Mar 14 19:41:35 multinode-442000 kubelet[1523]: E0314 19:41:35.479778    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.773735    8428 command_runner.go:130] > Mar 14 19:41:35 multinode-442000 kubelet[1523]: E0314 19:41:35.480230    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.773808    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 kubelet[1523]: E0314 19:41:37.480388    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.773808    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 kubelet[1523]: E0314 19:41:37.480921    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.773808    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.089907    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.773907    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.090056    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:42:10.090036325 +0000 UTC m=+69.911660546 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.773907    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191172    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.773984    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191351    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191425    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:42:10.191406835 +0000 UTC m=+70.013031056 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: I0314 19:41:38.578418    1523 scope.go:117] "RemoveContainer" containerID="07c2872c48edaa090b20d66267963c0d69c5c9eb97824b199af2d7e611ac596a"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: I0314 19:41:38.578814    1523 scope.go:117] "RemoveContainer" containerID="2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.579025    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(65d76566-4401-4b28-8452-10ed98624901)\"" pod="kube-system/storage-provisioner" podUID="65d76566-4401-4b28-8452-10ed98624901"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:39 multinode-442000 kubelet[1523]: E0314 19:41:39.479691    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:39 multinode-442000 kubelet[1523]: E0314 19:41:39.479909    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: E0314 19:41:41.479574    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: E0314 19:41:41.480003    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: I0314 19:41:41.518811    1523 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 kubelet[1523]: I0314 19:41:53.480206    1523 scope.go:117] "RemoveContainer" containerID="2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: I0314 19:42:00.447192    1523 scope.go:117] "RemoveContainer" containerID="9585e3eb2ead2f471eb0d22c8e29e4bfd954095774af365d80329ea39fff78e1"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: I0314 19:42:00.490865    1523 scope.go:117] "RemoveContainer" containerID="cd640f130e429bd4182c258358ec791604b8f307f9c45f2e3880e9b1a7df666a"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: E0314 19:42:00.516969    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.167906    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.214897    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439"
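The kubelet excerpt above shows failed MountVolume.SetUp operations being rescheduled with a growing delay ("No retries permitted until ... durationBeforeRetry 32s"). Below is a minimal Go sketch of that capped exponential-backoff pattern, for illustration only; the initial delay and cap are assumptions, not the kubelet's actual constants (its implementation lives in nestedpendingoperations.go, referenced in the log).

// Minimal sketch of capped exponential backoff, as reflected by the
// "durationBeforeRetry 32s" kubelet lines above. Illustration only;
// initial delay and cap are assumed values, not kubelet's constants.
package main

import (
	"fmt"
	"time"
)

func nextBackoff(last time.Duration) time.Duration {
	const (
		initial = 500 * time.Millisecond // assumed starting delay
		max     = 2 * time.Minute        // assumed cap
	)
	if last == 0 {
		return initial
	}
	next := last * 2
	if next > max {
		next = max
	}
	return next
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 8; i++ {
		d = nextBackoff(d)
		// With these assumed constants the delay reaches 32s on retry 7.
		fmt.Printf("retry %d after %v\n", i+1, d)
	}
}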
	I0314 19:42:13.815729    8428 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:42:13.815729    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:42:14.033247    8428 command_runner.go:130] > Name:               multinode-442000
	I0314 19:42:14.033324    8428 command_runner.go:130] > Roles:              control-plane
	I0314 19:42:14.033324    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:14.033324    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:14.033324    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:14.033324    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000
	I0314 19:42:14.033458    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:14.033510    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:14.033568    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:14.033568    8428 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0314 19:42:14.033640    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_19_05_0700
	I0314 19:42:14.033681    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:14.033725    8428 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0314 19:42:14.033783    8428 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0314 19:42:14.033783    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:14.033844    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:14.033844    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:14.033908    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:19:00 +0000
	I0314 19:42:14.033908    8428 command_runner.go:130] > Taints:             <none>
	I0314 19:42:14.033969    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:14.033969    8428 command_runner.go:130] > Lease:
	I0314 19:42:14.034027    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000
	I0314 19:42:14.034027    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:14.034088    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:42:07 +0000
	I0314 19:42:14.034088    8428 command_runner.go:130] > Conditions:
	I0314 19:42:14.034187    8428 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0314 19:42:14.034187    8428 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0314 19:42:14.034255    8428 command_runner.go:130] >   MemoryPressure   False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0314 19:42:14.034300    8428 command_runner.go:130] >   DiskPressure     False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0314 19:42:14.034393    8428 command_runner.go:130] >   PIDPressure      False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0314 19:42:14.034445    8428 command_runner.go:130] >   Ready            True    Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:41:41 +0000   KubeletReady                 kubelet is posting ready status
	I0314 19:42:14.034484    8428 command_runner.go:130] > Addresses:
	I0314 19:42:14.034539    8428 command_runner.go:130] >   InternalIP:  172.17.93.236
	I0314 19:42:14.034539    8428 command_runner.go:130] >   Hostname:    multinode-442000
	I0314 19:42:14.034580    8428 command_runner.go:130] > Capacity:
	I0314 19:42:14.034580    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:14.034580    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:14.034622    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:14.034622    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:14.034653    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:14.034653    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:14.034699    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:14.034699    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:14.034732    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:14.034732    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:14.034732    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:14.034789    8428 command_runner.go:130] > System Info:
	I0314 19:42:14.034840    8428 command_runner.go:130] >   Machine ID:                 37c811f81f1d4d709fd4a6eb79d70749
	I0314 19:42:14.034840    8428 command_runner.go:130] >   System UUID:                8469b663-ea90-da4f-856d-11034a8f65d8
	I0314 19:42:14.034890    8428 command_runner.go:130] >   Boot ID:                    91589624-f8f3-469e-b556-aa6dd64e54de
	I0314 19:42:14.034932    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:14.034969    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:14.035003    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:14.035071    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:14.035071    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:14.035133    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:14.035156    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:14.035185    8428 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0314 19:42:14.035246    8428 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0314 19:42:14.035352    8428 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0314 19:42:14.035393    8428 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:14.035433    8428 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0314 19:42:14.035468    8428 command_runner.go:130] >   default                     busybox-5b5d89c9d6-7446n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0314 19:42:14.035468    8428 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-d22jc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	I0314 19:42:14.035528    8428 command_runner.go:130] >   kube-system                 etcd-multinode-442000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0314 19:42:14.035528    8428 command_runner.go:130] >   kube-system                 kindnet-7b9lf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0314 19:42:14.035606    8428 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-442000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0314 19:42:14.035632    8428 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-442000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:14.035632    8428 command_runner.go:130] >   kube-system                 kube-proxy-cg28g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0314 19:42:14.035632    8428 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-442000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:14.035632    8428 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0314 19:42:14.035632    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:14.035632    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Resource           Requests     Limits
	I0314 19:42:14.035632    8428 command_runner.go:130] >   --------           --------     ------
	I0314 19:42:14.035632    8428 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0314 19:42:14.035632    8428 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0314 19:42:14.035632    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0314 19:42:14.035632    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0314 19:42:14.035632    8428 command_runner.go:130] > Events:
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0314 19:42:14.035632    8428 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:14.036181    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:14.036247    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:14.036301    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:14.036360    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:14.036415    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:14.036415    8428 command_runner.go:130] >   Normal  Starting                 23m                kubelet          Starting kubelet.
	I0314 19:42:14.036471    8428 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	I0314 19:42:14.036532    8428 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-442000 status is now: NodeReady
	I0314 19:42:14.036532    8428 command_runner.go:130] >   Normal  Starting                 74s                kubelet          Starting kubelet.
	I0314 19:42:14.036532    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:14.036606    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:14.036659    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:14.036720    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:14.036771    8428 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	I0314 19:42:14.036830    8428 command_runner.go:130] > Name:               multinode-442000-m02
	I0314 19:42:14.036830    8428 command_runner.go:130] > Roles:              <none>
	I0314 19:42:14.036830    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:14.036886    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:14.036886    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:14.036948    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000-m02
	I0314 19:42:14.036948    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:14.037006    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:14.037066    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:14.037066    8428 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0314 19:42:14.037121    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_22_02_0700
	I0314 19:42:14.037121    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:14.037184    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:14.037184    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:14.037338    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:14.037381    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:22:02 +0000
	I0314 19:42:14.037422    8428 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0314 19:42:14.037460    8428 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0314 19:42:14.037460    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:14.037541    8428 command_runner.go:130] > Lease:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000-m02
	I0314 19:42:14.037541    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:14.037541    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:38:03 +0000
	I0314 19:42:14.037541    8428 command_runner.go:130] > Conditions:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0314 19:42:14.037541    8428 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0314 19:42:14.037541    8428 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.037541    8428 command_runner.go:130] >   DiskPressure     Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.037541    8428 command_runner.go:130] >   PIDPressure      Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Ready            Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.037541    8428 command_runner.go:130] > Addresses:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   InternalIP:  172.17.80.135
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Hostname:    multinode-442000-m02
	I0314 19:42:14.037541    8428 command_runner.go:130] > Capacity:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:14.037541    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:14.037541    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:14.037541    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:14.037541    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:14.037541    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:14.037541    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:14.037541    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:14.037541    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:14.037541    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:14.037541    8428 command_runner.go:130] > System Info:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Machine ID:                 35b6f7da4d3943d99d8a5913cae1c8fb
	I0314 19:42:14.037541    8428 command_runner.go:130] >   System UUID:                0b9b8376-0767-f940-9973-d373e3dc050d
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Boot ID:                    45d479cc-26e8-46a6-9431-50637071f586
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:14.037541    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:14.037541    8428 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0314 19:42:14.037541    8428 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0314 19:42:14.037541    8428 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:14.037541    8428 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0314 19:42:14.037541    8428 command_runner.go:130] >   default                     busybox-5b5d89c9d6-8drpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0314 19:42:14.037541    8428 command_runner.go:130] >   kube-system                 kindnet-c7m4p               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0314 19:42:14.037541    8428 command_runner.go:130] >   kube-system                 kube-proxy-72dzs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0314 19:42:14.037541    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Resource           Requests   Limits
	I0314 19:42:14.037541    8428 command_runner.go:130] >   --------           --------   ------
	I0314 19:42:14.037541    8428 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0314 19:42:14.037541    8428 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0314 19:42:14.037541    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0314 19:42:14.037541    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0314 19:42:14.038642    8428 command_runner.go:130] > Events:
	I0314 19:42:14.038642    8428 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0314 19:42:14.038642    8428 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientMemory
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasNoDiskPressure
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientPID
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  NodeReady                19m                kubelet          Node multinode-442000-m02 status is now: NodeReady
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  NodeNotReady             15s                node-controller  Node multinode-442000-m02 status is now: NodeNotReady
	I0314 19:42:14.038765    8428 command_runner.go:130] > Name:               multinode-442000-m03
	I0314 19:42:14.038765    8428 command_runner.go:130] > Roles:              <none>
	I0314 19:42:14.038765    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000-m03
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_36_47_0700
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:14.038765    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:14.039293    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:14.039293    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:36:47 +0000
	I0314 19:42:14.039293    8428 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0314 19:42:14.039293    8428 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0314 19:42:14.039293    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:14.039293    8428 command_runner.go:130] > Lease:
	I0314 19:42:14.039412    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000-m03
	I0314 19:42:14.039412    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:14.039463    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:37:37 +0000
	I0314 19:42:14.039463    8428 command_runner.go:130] > Conditions:
	I0314 19:42:14.039463    8428 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0314 19:42:14.039463    8428 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0314 19:42:14.039463    8428 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.039463    8428 command_runner.go:130] >   DiskPressure     Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.039463    8428 command_runner.go:130] >   PIDPressure      Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.039463    8428 command_runner.go:130] >   Ready            Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.039463    8428 command_runner.go:130] > Addresses:
	I0314 19:42:14.039463    8428 command_runner.go:130] >   InternalIP:  172.17.84.215
	I0314 19:42:14.039463    8428 command_runner.go:130] >   Hostname:    multinode-442000-m03
	I0314 19:42:14.039463    8428 command_runner.go:130] > Capacity:
	I0314 19:42:14.039463    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:14.039463    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:14.039463    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:14.039463    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:14.039463    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:14.039463    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:14.039463    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:14.039463    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:14.039463    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:14.039463    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:14.039463    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:14.039463    8428 command_runner.go:130] > System Info:
	I0314 19:42:14.039463    8428 command_runner.go:130] >   Machine ID:                 dc7772516bfe448db22a5c28796f53ab
	I0314 19:42:14.039463    8428 command_runner.go:130] >   System UUID:                71573585-d564-f043-9154-3d5854ce61b8
	I0314 19:42:14.039463    8428 command_runner.go:130] >   Boot ID:                    fed746b2-110b-43ee-9065-09983ba74a37
	I0314 19:42:14.039995    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:14.039995    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:14.039995    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:14.040079    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:14.040079    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:14.040141    8428 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0314 19:42:14.040141    8428 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0314 19:42:14.040141    8428 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:14.040141    8428 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0314 19:42:14.040141    8428 command_runner.go:130] >   kube-system                 kindnet-r7zdb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	I0314 19:42:14.040141    8428 command_runner.go:130] >   kube-system                 kube-proxy-w2qls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	I0314 19:42:14.040141    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:14.040141    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Resource           Requests   Limits
	I0314 19:42:14.040141    8428 command_runner.go:130] >   --------           --------   ------
	I0314 19:42:14.040141    8428 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0314 19:42:14.040141    8428 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0314 19:42:14.040141    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0314 19:42:14.040141    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0314 19:42:14.040141    8428 command_runner.go:130] > Events:
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0314 19:42:14.040141    8428 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Normal  Starting                 5m25s                  kube-proxy       
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	I0314 19:42:14.040667    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	I0314 19:42:14.040667    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	I0314 19:42:14.040748    8428 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-442000-m03 status is now: NodeReady
	I0314 19:42:14.040826    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m27s (x5 over 5m29s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	I0314 19:42:14.040826    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m27s (x5 over 5m29s)  kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	I0314 19:42:14.040917    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m27s (x5 over 5m29s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	I0314 19:42:14.040917    8428 command_runner.go:130] >   Normal  RegisteredNode           5m23s                  node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
	I0314 19:42:14.040917    8428 command_runner.go:130] >   Normal  NodeReady                5m20s                  kubelet          Node multinode-442000-m03 status is now: NodeReady
	I0314 19:42:14.041025    8428 command_runner.go:130] >   Normal  NodeNotReady             3m53s                  node-controller  Node multinode-442000-m03 status is now: NodeNotReady
	I0314 19:42:14.041025    8428 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
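The describe-nodes output above reduces to per-node conditions and taints: multinode-442000 reports Ready=True, while m02 and m03 sit at Ready=Unknown ("Kubelet stopped posting node status") with unreachable taints. A hedged client-go sketch of checking the same condition programmatically follows; the kubeconfig path is the one the harness uses above, and the k8s.io/client-go dependency is assumed.

// Sketch: list each node's Ready condition, mirroring what
// "kubectl describe nodes" surfaces above. Assumes k8s.io/client-go
// is available and the minikube kubeconfig path from the log.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the harness passes to kubectl above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Expected against the output above: True for the
				// control plane, Unknown for m02/m03.
				fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}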
	I0314 19:42:14.051049    8428 logs.go:123] Gathering logs for kube-scheduler [dbb603289bf1] ...
	I0314 19:42:14.051049    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb603289bf1"
	I0314 19:42:14.082882    8428 command_runner.go:130] ! I0314 19:18:59.007917       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:14.082882    8428 command_runner.go:130] ! W0314 19:19:00.211611       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0314 19:42:14.082882    8428 command_runner.go:130] ! W0314 19:19:00.212802       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:14.083479    8428 command_runner.go:130] ! W0314 19:19:00.212990       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0314 19:42:14.083479    8428 command_runner.go:130] ! W0314 19:19:00.213108       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:42:14.083644    8428 command_runner.go:130] ! I0314 19:19:00.283055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:42:14.083644    8428 command_runner.go:130] ! I0314 19:19:00.284207       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.083644    8428 command_runner.go:130] ! I0314 19:19:00.288027       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:42:14.083743    8428 command_runner.go:130] ! I0314 19:19:00.288233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:14.083743    8428 command_runner.go:130] ! I0314 19:19:00.288206       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:42:14.083743    8428 command_runner.go:130] ! I0314 19:19:00.290233       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:14.083743    8428 command_runner.go:130] ! W0314 19:19:00.293166       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:14.083743    8428 command_runner.go:130] ! E0314 19:19:00.293367       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:14.083863    8428 command_runner.go:130] ! W0314 19:19:00.311723       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:14.083863    8428 command_runner.go:130] ! E0314 19:19:00.311803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:14.083863    8428 command_runner.go:130] ! W0314 19:19:00.312480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.083863    8428 command_runner.go:130] ! E0314 19:19:00.317665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.083974    8428 command_runner.go:130] ! W0314 19:19:00.313212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:14.083974    8428 command_runner.go:130] ! W0314 19:19:00.313379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:14.084069    8428 command_runner.go:130] ! W0314 19:19:00.313450       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:14.084069    8428 command_runner.go:130] ! W0314 19:19:00.313586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084069    8428 command_runner.go:130] ! W0314 19:19:00.313632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084161    8428 command_runner.go:130] ! W0314 19:19:00.313705       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:14.084161    8428 command_runner.go:130] ! W0314 19:19:00.313774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:14.084161    8428 command_runner.go:130] ! W0314 19:19:00.313864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:14.084161    8428 command_runner.go:130] ! W0314 19:19:00.313910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:14.084250    8428 command_runner.go:130] ! W0314 19:19:00.313978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:14.084250    8428 command_runner.go:130] ! W0314 19:19:00.314056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084250    8428 command_runner.go:130] ! W0314 19:19:00.314091       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:14.084340    8428 command_runner.go:130] ! E0314 19:19:00.318101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:14.084340    8428 command_runner.go:130] ! E0314 19:19:00.318394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:14.084340    8428 command_runner.go:130] ! E0314 19:19:00.318606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:14.084429    8428 command_runner.go:130] ! E0314 19:19:00.318728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084429    8428 command_runner.go:130] ! E0314 19:19:00.318953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084429    8428 command_runner.go:130] ! E0314 19:19:00.319076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:14.084519    8428 command_runner.go:130] ! E0314 19:19:00.319318       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:14.084519    8428 command_runner.go:130] ! E0314 19:19:00.319575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:14.084519    8428 command_runner.go:130] ! E0314 19:19:00.319588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:14.084631    8428 command_runner.go:130] ! E0314 19:19:00.319719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:14.084631    8428 command_runner.go:130] ! E0314 19:19:00.319732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084631    8428 command_runner.go:130] ! E0314 19:19:00.319788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:14.084631    8428 command_runner.go:130] ! W0314 19:19:01.268901       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:14.084729    8428 command_runner.go:130] ! E0314 19:19:01.269219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:14.084729    8428 command_runner.go:130] ! W0314 19:19:01.309661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084729    8428 command_runner.go:130] ! E0314 19:19:01.309894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084729    8428 command_runner.go:130] ! W0314 19:19:01.318104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084834    8428 command_runner.go:130] ! E0314 19:19:01.318410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084834    8428 command_runner.go:130] ! W0314 19:19:01.382148       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:14.084834    8428 command_runner.go:130] ! E0314 19:19:01.382194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:14.084941    8428 command_runner.go:130] ! W0314 19:19:01.454259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:14.084941    8428 command_runner.go:130] ! E0314 19:19:01.454398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:14.085024    8428 command_runner.go:130] ! W0314 19:19:01.505982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:14.085024    8428 command_runner.go:130] ! E0314 19:19:01.506182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.640521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.640836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.681052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.681953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.732243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.732288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.767241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.767329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.783665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.783845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.812936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.813027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.821109       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.821267       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.843311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.843339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.914649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.914986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! I0314 19:19:04.090863       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:14.085078    8428 command_runner.go:130] ! I0314 19:38:43.236637       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0314 19:42:14.085620    8428 command_runner.go:130] ! I0314 19:38:43.237145       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0314 19:42:14.085620    8428 command_runner.go:130] ! E0314 19:38:43.237439       1 run.go:74] "command failed" err="finished without leader elect"
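	The block above is the tail of the kube-scheduler container log. The long run of paired W/E "forbidden" entries at 19:19:01 is the scheduler's informers listing resources before the apiserver has finished wiring up RBAC for system:kube-scheduler; they stop once authorization is in place, which is why the "Caches are synced" line follows at 19:19:04. On a live cluster the binding can be checked with a command along these lines (a sketch; the kubectl context name is assumed from the profile name in these logs):

	kubectl --context multinode-442000 get clusterrolebinding system:kube-scheduler -o wide

	The closing "finished without leader elect" error at 19:38:43 is the scheduler's normal exit path when its leader-election lease is interrupted, here by the node being stopped.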
	I0314 19:42:14.096261    8428 logs.go:123] Gathering logs for kube-controller-manager [16b80f73683d] ...
	I0314 19:42:14.096291    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b80f73683d"
	I0314 19:42:14.132585    8428 command_runner.go:130] ! I0314 19:18:57.791996       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:14.133034    8428 command_runner.go:130] ! I0314 19:18:58.802083       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 19:42:14.133107    8428 command_runner.go:130] ! I0314 19:18:58.802123       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.133107    8428 command_runner.go:130] ! I0314 19:18:58.803952       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:14.133241    8428 command_runner.go:130] ! I0314 19:18:58.804068       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:14.133241    8428 command_runner.go:130] ! I0314 19:18:58.807259       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:42:14.133412    8428 command_runner.go:130] ! I0314 19:18:58.807321       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:14.133412    8428 command_runner.go:130] ! I0314 19:19:03.211766       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0314 19:42:14.133575    8428 command_runner.go:130] ! I0314 19:19:03.241058       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0314 19:42:14.133655    8428 command_runner.go:130] ! I0314 19:19:03.241394       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 19:42:14.133731    8428 command_runner.go:130] ! I0314 19:19:03.241421       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0314 19:42:14.133731    8428 command_runner.go:130] ! I0314 19:19:03.277645       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0314 19:42:14.133812    8428 command_runner.go:130] ! I0314 19:19:03.277842       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:42:14.133987    8428 command_runner.go:130] ! I0314 19:19:03.277987       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0314 19:42:14.134092    8428 command_runner.go:130] ! I0314 19:19:03.278099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0314 19:42:14.134092    8428 command_runner.go:130] ! I0314 19:19:03.278176       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0314 19:42:14.134181    8428 command_runner.go:130] ! I0314 19:19:03.278283       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0314 19:42:14.134261    8428 command_runner.go:130] ! I0314 19:19:03.278389       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0314 19:42:14.134336    8428 command_runner.go:130] ! I0314 19:19:03.278566       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0314 19:42:14.134418    8428 command_runner.go:130] ! W0314 19:19:03.278710       1 shared_informer.go:593] resyncPeriod 13h23m0.648968128s is smaller than resyncCheckPeriod 15h46m21.421594093s and the informer has already started. Changing it to 15h46m21.421594093s
	I0314 19:42:14.134418    8428 command_runner.go:130] ! I0314 19:19:03.278915       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0314 19:42:14.134506    8428 command_runner.go:130] ! I0314 19:19:03.279052       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0314 19:42:14.134585    8428 command_runner.go:130] ! I0314 19:19:03.279196       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0314 19:42:14.134585    8428 command_runner.go:130] ! I0314 19:19:03.279291       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0314 19:42:14.134834    8428 command_runner.go:130] ! I0314 19:19:03.279313       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0314 19:42:14.134834    8428 command_runner.go:130] ! I0314 19:19:03.279560       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0314 19:42:14.134915    8428 command_runner.go:130] ! I0314 19:19:03.279688       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0314 19:42:14.134991    8428 command_runner.go:130] ! I0314 19:19:03.279834       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0314 19:42:14.135068    8428 command_runner.go:130] ! I0314 19:19:03.279857       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0314 19:42:14.135124    8428 command_runner.go:130] ! I0314 19:19:03.279927       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0314 19:42:14.135124    8428 command_runner.go:130] ! I0314 19:19:03.280011       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0314 19:42:14.135185    8428 command_runner.go:130] ! I0314 19:19:03.280106       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0314 19:42:14.135185    8428 command_runner.go:130] ! I0314 19:19:03.280148       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:42:14.135249    8428 command_runner.go:130] ! I0314 19:19:03.280224       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:42:14.135249    8428 command_runner.go:130] ! I0314 19:19:03.280306       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:14.135309    8428 command_runner.go:130] ! I0314 19:19:03.280392       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:42:14.135309    8428 command_runner.go:130] ! I0314 19:19:03.297527       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0314 19:42:14.135365    8428 command_runner.go:130] ! I0314 19:19:03.297675       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0314 19:42:14.135424    8428 command_runner.go:130] ! I0314 19:19:03.297706       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0314 19:42:14.135424    8428 command_runner.go:130] ! I0314 19:19:03.310691       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0314 19:42:14.135480    8428 command_runner.go:130] ! I0314 19:19:03.310864       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0314 19:42:14.135541    8428 command_runner.go:130] ! I0314 19:19:03.311121       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0314 19:42:14.135596    8428 command_runner.go:130] ! I0314 19:19:03.311163       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0314 19:42:14.135596    8428 command_runner.go:130] ! I0314 19:19:03.311170       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0314 19:42:14.135596    8428 command_runner.go:130] ! I0314 19:19:03.312491       1 shared_informer.go:318] Caches are synced for tokens
	I0314 19:42:14.135656    8428 command_runner.go:130] ! I0314 19:19:03.324271       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0314 19:42:14.135717    8428 command_runner.go:130] ! I0314 19:19:03.324640       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0314 19:42:14.135717    8428 command_runner.go:130] ! I0314 19:19:03.324856       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0314 19:42:14.135778    8428 command_runner.go:130] ! I0314 19:19:03.341489       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:42:14.135778    8428 command_runner.go:130] ! I0314 19:19:03.341829       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:42:14.135833    8428 command_runner.go:130] ! I0314 19:19:03.359979       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:42:14.135892    8428 command_runner.go:130] ! I0314 19:19:03.360131       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:42:14.135947    8428 command_runner.go:130] ! I0314 19:19:03.373006       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:42:14.135947    8428 command_runner.go:130] ! I0314 19:19:03.373343       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:42:14.136006    8428 command_runner.go:130] ! I0314 19:19:03.373606       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:42:14.136062    8428 command_runner.go:130] ! I0314 19:19:03.385026       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0314 19:42:14.136062    8428 command_runner.go:130] ! I0314 19:19:03.385081       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0314 19:42:14.136121    8428 command_runner.go:130] ! I0314 19:19:03.385807       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0314 19:42:14.136180    8428 command_runner.go:130] ! I0314 19:19:03.399556       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0314 19:42:14.136180    8428 command_runner.go:130] ! I0314 19:19:03.399796       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0314 19:42:14.136240    8428 command_runner.go:130] ! I0314 19:19:03.399936       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0314 19:42:14.136295    8428 command_runner.go:130] ! I0314 19:19:03.400078       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:42:14.136354    8428 command_runner.go:130] ! I0314 19:19:03.400349       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:42:14.136354    8428 command_runner.go:130] ! I0314 19:19:03.400489       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:42:14.136411    8428 command_runner.go:130] ! I0314 19:19:03.521977       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0314 19:42:14.136411    8428 command_runner.go:130] ! I0314 19:19:03.522076       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0314 19:42:14.136471    8428 command_runner.go:130] ! I0314 19:19:03.522086       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0314 19:42:14.136471    8428 command_runner.go:130] ! I0314 19:19:03.567446       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0314 19:42:14.136528    8428 command_runner.go:130] ! I0314 19:19:03.567574       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0314 19:42:14.136590    8428 command_runner.go:130] ! I0314 19:19:03.567615       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.136590    8428 command_runner.go:130] ! I0314 19:19:03.568792       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0314 19:42:14.136706    8428 command_runner.go:130] ! I0314 19:19:03.568891       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0314 19:42:14.136769    8428 command_runner.go:130] ! I0314 19:19:03.569119       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.136820    8428 command_runner.go:130] ! I0314 19:19:03.570147       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0314 19:42:14.136820    8428 command_runner.go:130] ! I0314 19:19:03.570261       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:14.136875    8428 command_runner.go:130] ! I0314 19:19:03.570356       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.136937    8428 command_runner.go:130] ! I0314 19:19:03.571403       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0314 19:42:14.136998    8428 command_runner.go:130] ! I0314 19:19:03.571529       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.137042    8428 command_runner.go:130] ! I0314 19:19:03.571434       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0314 19:42:14.137100    8428 command_runner.go:130] ! I0314 19:19:03.572095       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:42:14.137100    8428 command_runner.go:130] ! I0314 19:19:03.723142       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0314 19:42:14.137160    8428 command_runner.go:130] ! I0314 19:19:03.723289       1 ttl_controller.go:124] "Starting TTL controller"
	I0314 19:42:14.137160    8428 command_runner.go:130] ! I0314 19:19:03.723300       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0314 19:42:14.137216    8428 command_runner.go:130] ! I0314 19:19:13.784656       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0314 19:42:14.137276    8428 command_runner.go:130] ! I0314 19:19:13.784710       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0314 19:42:14.137276    8428 command_runner.go:130] ! I0314 19:19:13.784891       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0314 19:42:14.137333    8428 command_runner.go:130] ! I0314 19:19:13.784975       1 shared_informer.go:311] Waiting for caches to sync for node
	I0314 19:42:14.137333    8428 command_runner.go:130] ! I0314 19:19:13.813537       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:42:14.137393    8428 command_runner.go:130] ! I0314 19:19:13.814099       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:42:14.137393    8428 command_runner.go:130] ! I0314 19:19:13.814528       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:42:14.137450    8428 command_runner.go:130] ! I0314 19:19:13.831516       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0314 19:42:14.137512    8428 command_runner.go:130] ! I0314 19:19:13.831928       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0314 19:42:14.137569    8428 command_runner.go:130] ! I0314 19:19:13.832023       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:14.137569    8428 command_runner.go:130] ! I0314 19:19:13.832052       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0314 19:42:14.137631    8428 command_runner.go:130] ! I0314 19:19:13.876141       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0314 19:42:14.137631    8428 command_runner.go:130] ! I0314 19:19:13.876437       1 horizontal.go:200] "Starting HPA controller"
	I0314 19:42:14.137690    8428 command_runner.go:130] ! I0314 19:19:13.876448       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0314 19:42:14.137690    8428 command_runner.go:130] ! I0314 19:19:13.892498       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0314 19:42:14.137751    8428 command_runner.go:130] ! I0314 19:19:13.892891       1 disruption.go:433] "Sending events to api server."
	I0314 19:42:14.137751    8428 command_runner.go:130] ! I0314 19:19:13.893092       1 disruption.go:444] "Starting disruption controller"
	I0314 19:42:14.137751    8428 command_runner.go:130] ! I0314 19:19:13.893185       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:42:14.137809    8428 command_runner.go:130] ! I0314 19:19:13.895299       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0314 19:42:14.137860    8428 command_runner.go:130] ! I0314 19:19:13.895861       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0314 19:42:14.137860    8428 command_runner.go:130] ! I0314 19:19:13.896105       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0314 19:42:14.137898    8428 command_runner.go:130] ! I0314 19:19:13.908480       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:42:14.137944    8428 command_runner.go:130] ! I0314 19:19:13.908861       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:42:14.138009    8428 command_runner.go:130] ! I0314 19:19:13.908873       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:42:14.138009    8428 command_runner.go:130] ! I0314 19:19:13.929369       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0314 19:42:14.138070    8428 command_runner.go:130] ! I0314 19:19:13.929803       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0314 19:42:14.138126    8428 command_runner.go:130] ! I0314 19:19:13.930050       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0314 19:42:14.138126    8428 command_runner.go:130] ! I0314 19:19:13.974683       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:42:14.138187    8428 command_runner.go:130] ! I0314 19:19:13.974899       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:42:14.138187    8428 command_runner.go:130] ! I0314 19:19:13.975108       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:42:14.138245    8428 command_runner.go:130] ! E0314 19:19:14.134866       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0314 19:42:14.138245    8428 command_runner.go:130] ! I0314 19:19:14.135266       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0314 19:42:14.138307    8428 command_runner.go:130] ! E0314 19:19:14.170400       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 19:42:14.138362    8428 command_runner.go:130] ! I0314 19:19:14.170426       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 19:42:14.138421    8428 command_runner.go:130] ! I0314 19:19:14.324676       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0314 19:42:14.138421    8428 command_runner.go:130] ! I0314 19:19:14.324865       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 19:42:14.138478    8428 command_runner.go:130] ! I0314 19:19:14.325169       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 19:42:14.138537    8428 command_runner.go:130] ! I0314 19:19:14.474401       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0314 19:42:14.138576    8428 command_runner.go:130] ! I0314 19:19:14.474562       1 controller.go:169] "Starting ephemeral volume controller"
	I0314 19:42:14.138576    8428 command_runner.go:130] ! I0314 19:19:14.474660       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0314 19:42:14.138576    8428 command_runner.go:130] ! I0314 19:19:14.633668       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0314 19:42:14.138667    8428 command_runner.go:130] ! I0314 19:19:14.633821       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0314 19:42:14.138667    8428 command_runner.go:130] ! I0314 19:19:14.633832       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0314 19:42:14.138773    8428 command_runner.go:130] ! I0314 19:19:14.773955       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0314 19:42:14.138773    8428 command_runner.go:130] ! I0314 19:19:14.774019       1 gc_controller.go:101] "Starting GC controller"
	I0314 19:42:14.138773    8428 command_runner.go:130] ! I0314 19:19:14.774027       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 19:42:14.138773    8428 command_runner.go:130] ! I0314 19:19:14.925568       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0314 19:42:14.138872    8428 command_runner.go:130] ! I0314 19:19:14.925814       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 19:42:14.138872    8428 command_runner.go:130] ! I0314 19:19:14.925828       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 19:42:14.138872    8428 command_runner.go:130] ! I0314 19:19:15.075328       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0314 19:42:14.138872    8428 command_runner.go:130] ! I0314 19:19:15.075556       1 job_controller.go:226] "Starting job controller"
	I0314 19:42:14.138872    8428 command_runner.go:130] ! I0314 19:19:15.075634       1 shared_informer.go:311] Waiting for caches to sync for job
	I0314 19:42:14.138981    8428 command_runner.go:130] ! I0314 19:19:15.225929       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0314 19:42:14.138981    8428 command_runner.go:130] ! I0314 19:19:15.226065       1 expand_controller.go:328] "Starting expand controller"
	I0314 19:42:14.138981    8428 command_runner.go:130] ! I0314 19:19:15.226077       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0314 19:42:14.139089    8428 command_runner.go:130] ! I0314 19:19:15.378471       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:42:14.139089    8428 command_runner.go:130] ! I0314 19:19:15.378640       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:42:14.139089    8428 command_runner.go:130] ! I0314 19:19:15.379237       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:42:14.139089    8428 command_runner.go:130] ! I0314 19:19:15.525089       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:42:14.139195    8428 command_runner.go:130] ! I0314 19:19:15.525565       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:42:14.139195    8428 command_runner.go:130] ! I0314 19:19:15.525643       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:42:14.139304    8428 command_runner.go:130] ! I0314 19:19:15.679545       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:42:14.139304    8428 command_runner.go:130] ! I0314 19:19:15.679611       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:42:14.139404    8428 command_runner.go:130] ! I0314 19:19:15.679619       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:42:14.139404    8428 command_runner.go:130] ! I0314 19:19:15.825516       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:42:14.139404    8428 command_runner.go:130] ! I0314 19:19:15.825908       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:42:14.139506    8428 command_runner.go:130] ! I0314 19:19:15.825920       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:42:14.139554    8428 command_runner.go:130] ! I0314 19:19:15.976308       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0314 19:42:14.139635    8428 command_runner.go:130] ! I0314 19:19:15.976673       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:15.976858       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:15.993409       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.017841       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000\" does not exist"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.022817       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.023332       1 shared_informer.go:318] Caches are synced for TTL
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.025413       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.025667       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.025909       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.026194       1 shared_informer.go:318] Caches are synced for expand
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.030689       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.042937       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.063170       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.069816       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.069953       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.071382       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.072881       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.075260       1 shared_informer.go:318] Caches are synced for GC
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.075273       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.075312       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.076852       1 shared_informer.go:318] Caches are synced for HPA
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.077008       1 shared_informer.go:318] Caches are synced for crt configmap
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.077022       1 shared_informer.go:318] Caches are synced for job
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.079681       1 shared_informer.go:318] Caches are synced for deployment
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.079893       1 shared_informer.go:318] Caches are synced for cronjob
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.085788       1 shared_informer.go:318] Caches are synced for node
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.085869       1 range_allocator.go:174] "Sending events to api server"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.085937       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.085945       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.085951       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.086224       1 shared_informer.go:318] Caches are synced for PVC protection
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.093730       1 shared_informer.go:318] Caches are synced for disruption
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.093802       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.097148       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.098688       1 shared_informer.go:318] Caches are synced for service account
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.102404       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000" podCIDRs=["10.244.0.0/24"]
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.112396       1 shared_informer.go:318] Caches are synced for taint
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.112849       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.113070       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.113155       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.112659       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.113865       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.113966       1 taint_manager.go:210] "Sending events to api server"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.115068       1 shared_informer.go:318] Caches are synced for namespace
	I0314 19:42:14.140230    8428 command_runner.go:130] ! I0314 19:19:16.118281       1 event.go:307] "Event occurred" object="multinode-442000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000 event: Registered Node multinode-442000 in Controller"
	I0314 19:42:14.140230    8428 command_runner.go:130] ! I0314 19:19:16.134584       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0314 19:42:14.140230    8428 command_runner.go:130] ! I0314 19:19:16.151625       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.140230    8428 command_runner.go:130] ! I0314 19:19:16.171551       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.140351    8428 command_runner.go:130] ! I0314 19:19:16.174341       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.140351    8428 command_runner.go:130] ! I0314 19:19:16.174358       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.140399    8428 command_runner.go:130] ! I0314 19:19:16.184987       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:14.140454    8428 command_runner.go:130] ! I0314 19:19:16.223118       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 19:42:14.140454    8428 command_runner.go:130] ! I0314 19:19:16.225526       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 19:42:14.140503    8428 command_runner.go:130] ! I0314 19:19:16.225950       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 19:42:14.140554    8428 command_runner.go:130] ! I0314 19:19:16.274020       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 19:42:14.140601    8428 command_runner.go:130] ! I0314 19:19:16.320250       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7b9lf"
	I0314 19:42:14.140655    8428 command_runner.go:130] ! I0314 19:19:16.328650       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cg28g"
	I0314 19:42:14.140655    8428 command_runner.go:130] ! I0314 19:19:16.626855       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:14.140655    8428 command_runner.go:130] ! I0314 19:19:16.633099       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:14.140765    8428 command_runner.go:130] ! I0314 19:19:16.633344       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 19:42:14.140765    8428 command_runner.go:130] ! I0314 19:19:16.789964       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0314 19:42:14.140813    8428 command_runner.go:130] ! I0314 19:19:17.099870       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pvxpr"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:17.114819       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-d22jc"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:17.146456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="355.713874ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:17.166202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.688691ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:17.169087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="2.771063ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:18.399096       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:18.448322       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-pvxpr"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:18.482373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.944747ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:18.500300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.716936ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:18.500887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.317µs"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:26.475232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.515µs"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:26.505160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.309µs"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:28.423231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.310782ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:28.423925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.006µs"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:31.116802       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:02.467925       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:02.479576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m02" podCIDRs=["10.244.1.0/24"]
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:02.507610       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-72dzs"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:02.511169       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-c7m4p"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:06.145908       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:06.146201       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:20.862710       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:45.188036       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:45.218022       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8drpb"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:45.241867       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-7446n"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:45.267427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="80.313691ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:45.292961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="25.159362ms"
	I0314 19:42:14.141459    8428 command_runner.go:130] ! I0314 19:22:45.311264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.241692ms"
	I0314 19:42:14.141459    8428 command_runner.go:130] ! I0314 19:22:45.311407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="93.911µs"
	I0314 19:42:14.141459    8428 command_runner.go:130] ! I0314 19:22:48.320252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.515467ms"
	I0314 19:42:14.141459    8428 command_runner.go:130] ! I0314 19:22:48.320403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.303µs"
	I0314 19:42:14.141617    8428 command_runner.go:130] ! I0314 19:22:48.344640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.018521ms"
	I0314 19:42:14.141617    8428 command_runner.go:130] ! I0314 19:22:48.344838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.804µs"
	I0314 19:42:14.141669    8428 command_runner.go:130] ! I0314 19:26:25.208780       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:25.214591       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:25.248082       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.2.0/24"]
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:25.265233       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-r7zdb"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:25.273144       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w2qls"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:26.207170       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:26.207236       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:43.758846       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:33:46.333556       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:33:46.333891       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:33:46.348976       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:33:46.370200       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:39.868492       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:41.400896       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-442000-m03 event: Removing Node multinode-442000-m03 from Controller"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:47.335802       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:47.336128       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:47.352987       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.3.0/24"]
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:51.403261       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:54.976864       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:38:21.463528       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:38:21.463818       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:42:14.142314    8428 command_runner.go:130] ! I0314 19:38:21.486796       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.142314    8428 command_runner.go:130] ! I0314 19:38:21.501217       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
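	The kube-controller-manager log above captures each controller starting, its informer cache syncing, and the node-ipam-controller handing out one PodCIDR per node: 10.244.0.0/24 for multinode-442000, 10.244.1.0/24 for -m02, and 10.244.2.0/24, later 10.244.3.0/24, for -m03 as it is removed and re-added. The repeated NodeNotReady events for multinode-442000-m03 line up with that node being stopped during the test. On a running cluster the per-node allocations could be confirmed with something like the following (hedged; context name assumed as above):

	kubectl --context multinode-442000 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR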
	I0314 19:42:14.159016    8428 logs.go:123] Gathering logs for etcd [a81a9c43c355] ...
	I0314 19:42:14.159016    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81a9c43c355"
	I0314 19:42:14.192397    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:01.944953Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0314 19:42:14.192904    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.945607Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.93.236:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.93.236:2380","--initial-cluster=multinode-442000=https://172.17.93.236:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.93.236:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.93.236:2380","--name=multinode-442000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0314 19:42:14.192974    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.945676Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0314 19:42:14.192974    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:01.945701Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0314 19:42:14.192974    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94571Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.93.236:2380"]}
	I0314 19:42:14.193047    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94582Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0314 19:42:14.193047    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94751Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"]}
	I0314 19:42:14.193200    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.948798Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-442000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0314 19:42:14.193234    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.989049Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"39.493838ms"}
	I0314 19:42:14.193273    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.0258Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0314 19:42:14.193273    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.055698Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","commit-index":1967}
	I0314 19:42:14.193334    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.067927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=()"}
	I0314 19:42:14.193390    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.067975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became follower at term 2"}
	I0314 19:42:14.193390    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.068051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fa26a6ed08186c39 [peers: [], term: 2, commit: 1967, applied: 0, lastindex: 1967, lastterm: 2]"}
	I0314 19:42:14.193390    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:02.100633Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0314 19:42:14.193441    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.113992Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1090}
	I0314 19:42:14.193441    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.125551Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1704}
	I0314 19:42:14.193441    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.137052Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0314 19:42:14.193507    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.152836Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"fa26a6ed08186c39","timeout":"7s"}
	I0314 19:42:14.193507    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.153448Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"fa26a6ed08186c39"}
	I0314 19:42:14.193507    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.153504Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"fa26a6ed08186c39","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0314 19:42:14.193568    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154089Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0314 19:42:14.193568    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154894Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0314 19:42:14.193624    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154977Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0314 19:42:14.193624    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154992Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0314 19:42:14.193624    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=(18025278095570267193)"}
	I0314 19:42:14.193676    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158756Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","added-peer-id":"fa26a6ed08186c39","added-peer-peer-urls":["https://172.17.86.124:2380"]}
	I0314 19:42:14.193676    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","cluster-version":"3.5"}
	I0314 19:42:14.193732    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158969Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0314 19:42:14.193732    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.159838Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0314 19:42:14.193783    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.160148Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"fa26a6ed08186c39","initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0314 19:42:14.193838    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.160272Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0314 19:42:14.193838    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.161335Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.93.236:2380"}
	I0314 19:42:14.193838    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.161389Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.93.236:2380"}
	I0314 19:42:14.193913    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 is starting a new election at term 2"}
	I0314 19:42:14.193913    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became pre-candidate at term 2"}
	I0314 19:42:14.193913    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgPreVoteResp from fa26a6ed08186c39 at term 2"}
	I0314 19:42:14.193976    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became candidate at term 3"}
	I0314 19:42:14.193976    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgVoteResp from fa26a6ed08186c39 at term 3"}
	I0314 19:42:14.193976    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became leader at term 3"}
	I0314 19:42:14.193976    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fa26a6ed08186c39 elected leader fa26a6ed08186c39 at term 3"}
	I0314 19:42:14.194060    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.292472Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fa26a6ed08186c39","local-member-attributes":"{Name:multinode-442000 ClientURLs:[https://172.17.93.236:2379]}","request-path":"/0/members/fa26a6ed08186c39/attributes","cluster-id":"76b99849a2fc5549","publish-timeout":"7s"}
	I0314 19:42:14.194060    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.292867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0314 19:42:14.194060    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.296522Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0314 19:42:14.194114    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.298446Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0314 19:42:14.194114    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.311867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.93.236:2379"}
	I0314 19:42:14.194114    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.311957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0314 19:42:14.194114    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.31205Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
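	The etcd log above shows a single-member cluster (member fa26a6ed08186c39) recovering its WAL after a restart, passing the initial corruption check, and re-electing itself leader at term 3, which is the expected sequence for a restarted one-node control plane. A hedged way to confirm member health afterwards, reusing the cert paths from the flags above (the static-pod name etcd-multinode-442000 is an assumption based on the usual kubeadm naming):

	  kubectl --context multinode-442000 -n kube-system exec etcd-multinode-442000 -- \
	    etcdctl --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint health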
	I0314 19:42:14.199853    8428 logs.go:123] Gathering logs for kube-proxy [497007582e44] ...
	I0314 19:42:14.199853    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497007582e44"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.342277       1 server_others.go:69] "Using iptables proxy"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.381589       1 node.go:141] Successfully retrieved node IP: 172.17.93.236
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.703360       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.703384       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.724122       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.726554       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.729424       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.729460       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.732062       1 config.go:188] "Starting service config controller"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.732501       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.732571       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.732581       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.733523       1 config.go:315] "Starting node config controller"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.733550       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.832968       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.833049       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.835501       1 shared_informer.go:318] Caches are synced for node config
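	kube-proxy came up in iptables mode, retrieved the node IP 172.17.93.236, set route_localnet=1, and synced its service, endpoint-slice, and node config caches within roughly 100ms. If the resulting proxy rules ever need to be verified on the node, a sketch using the same binary as the rest of this run (KUBE-SERVICES is the standard kube-proxy nat chain):

	  out/minikube-windows-amd64.exe -p multinode-442000 ssh -- sudo iptables -t nat -L KUBE-SERVICES
	  out/minikube-windows-amd64.exe -p multinode-442000 ssh -- sysctl net.ipv4.conf.all.route_localnet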
	I0314 19:42:14.235918    8428 logs.go:123] Gathering logs for kindnet [999e4c168afe] ...
	I0314 19:42:14.235918    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 999e4c168afe"
	I0314 19:42:14.261684    8428 command_runner.go:130] ! I0314 19:41:08.409720       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0314 19:42:14.262069    8428 command_runner.go:130] ! I0314 19:41:08.410195       1 main.go:107] hostIP = 172.17.93.236
	I0314 19:42:14.262168    8428 command_runner.go:130] ! podIP = 172.17.93.236
	I0314 19:42:14.262168    8428 command_runner.go:130] ! I0314 19:41:08.411178       1 main.go:116] setting mtu 1500 for CNI 
	I0314 19:42:14.262168    8428 command_runner.go:130] ! I0314 19:41:08.411230       1 main.go:146] kindnetd IP family: "ipv4"
	I0314 19:42:14.262215    8428 command_runner.go:130] ! I0314 19:41:08.411277       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0314 19:42:14.262240    8428 command_runner.go:130] ! I0314 19:41:38.747509       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0314 19:42:14.262240    8428 command_runner.go:130] ! I0314 19:41:38.770843       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:14.262240    8428 command_runner.go:130] ! I0314 19:41:38.770994       1 main.go:227] handling current node
	I0314 19:42:14.262240    8428 command_runner.go:130] ! I0314 19:41:38.771413       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:14.262240    8428 command_runner.go:130] ! I0314 19:41:38.771428       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:14.262240    8428 command_runner.go:130] ! I0314 19:41:38.771670       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.80.135 Flags: [] Table: 0} 
	I0314 19:42:14.262327    8428 command_runner.go:130] ! I0314 19:41:38.771817       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:14.262327    8428 command_runner.go:130] ! I0314 19:41:38.771827       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:14.262361    8428 command_runner.go:130] ! I0314 19:41:38.771944       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.84.215 Flags: [] Table: 0} 
	I0314 19:42:14.262361    8428 command_runner.go:130] ! I0314 19:41:48.777997       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:14.262361    8428 command_runner.go:130] ! I0314 19:41:48.778091       1 main.go:227] handling current node
	I0314 19:42:14.262361    8428 command_runner.go:130] ! I0314 19:41:48.778105       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:14.262361    8428 command_runner.go:130] ! I0314 19:41:48.778113       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:14.262423    8428 command_runner.go:130] ! I0314 19:41:48.778217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:14.262467    8428 command_runner.go:130] ! I0314 19:41:48.778373       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:14.262467    8428 command_runner.go:130] ! I0314 19:41:58.793215       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:14.262467    8428 command_runner.go:130] ! I0314 19:41:58.793285       1 main.go:227] handling current node
	I0314 19:42:14.262467    8428 command_runner.go:130] ! I0314 19:41:58.793297       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:14.262467    8428 command_runner.go:130] ! I0314 19:41:58.793304       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:14.262526    8428 command_runner.go:130] ! I0314 19:41:58.793793       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:14.262526    8428 command_runner.go:130] ! I0314 19:41:58.793859       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:14.262570    8428 command_runner.go:130] ! I0314 19:42:08.808709       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:14.262606    8428 command_runner.go:130] ! I0314 19:42:08.808803       1 main.go:227] handling current node
	I0314 19:42:14.262606    8428 command_runner.go:130] ! I0314 19:42:08.808818       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:14.262647    8428 command_runner.go:130] ! I0314 19:42:08.808826       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:14.262647    8428 command_runner.go:130] ! I0314 19:42:08.809153       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:14.262647    8428 command_runner.go:130] ! I0314 19:42:08.809168       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
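	kindnet initially failed to reach the API server (dial 10.96.0.1:443: i/o timeout, consistent with kube-proxy not having programmed the service VIP yet), then began installing one route per remote node: each peer's PodCIDR routed via that peer's node IP. A quick hedged check of the resulting routing table on the primary node; the expected lines are inferred from the log above, not captured output:

	  out/minikube-windows-amd64.exe -p multinode-442000 ssh -- ip route show
	  # expected to include (assumption):
	  #   10.244.1.0/24 via 172.17.80.135
	  #   10.244.3.0/24 via 172.17.84.215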
	I0314 19:42:14.265362    8428 logs.go:123] Gathering logs for Docker ...
	I0314 19:42:14.265362    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 19:42:14.288815    8428 command_runner.go:130] > Mar 14 19:39:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:14.289423    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:14.289423    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:14.289423    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:14.289423    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.289423    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0314 19:42:14.289562    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.289562    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0314 19:42:14.289562    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:14.289664    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.289664    8428 command_runner.go:130] > Mar 14 19:40:26 multinode-442000 systemd[1]: Starting Docker Application Container Engine...
	I0314 19:42:14.289794    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.010258466Z" level=info msg="Starting up"
	I0314 19:42:14.289880    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.011413188Z" level=info msg="containerd not running, starting managed containerd"
	I0314 19:42:14.289880    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.012927209Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=656
	I0314 19:42:14.289969    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.042687292Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069138554Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069242083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069344111Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069362416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070081016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070164740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070380400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070511536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070532642Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070544145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070983067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.071556427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074554061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.290585    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074645687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290585    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074800830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.290675    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074883153Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0314 19:42:14.290788    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075687977Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0314 19:42:14.290823    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075800308Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0314 19:42:14.290823    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075818813Z" level=info msg="metadata content store policy set" policy=shared
	I0314 19:42:14.290917    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081334348Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0314 19:42:14.290917    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081440978Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0314 19:42:14.291002    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081463484Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0314 19:42:14.291002    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081526902Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0314 19:42:14.291078    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081545007Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0314 19:42:14.291157    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081621128Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0314 19:42:14.291157    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082036144Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082193387Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082276711Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082349431Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082368036Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082385141Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082401545Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082417450Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082433154Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082457161Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082515377Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082533482Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082554788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082572093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082586997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082601801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291826    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082616305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291826    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082631109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291913    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082643913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291913    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082659317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292002    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082673721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292002    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082690226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292084    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082704230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082717333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082730637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082747942Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082771048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082785952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082799956Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082936994Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082973004Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082986808Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082998612Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083067631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083095839Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083107842Z" level=info msg="NRI interface is disabled by configuration."
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083364013Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0314 19:42:14.292662    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083531860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0314 19:42:14.292662    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083575672Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0314 19:42:14.292662    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083609482Z" level=info msg="containerd successfully booted in 0.043398s"
	I0314 19:42:14.292662    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.063674621Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0314 19:42:14.292788    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.220876850Z" level=info msg="Loading containers: start."
	I0314 19:42:14.292788    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.643208421Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0314 19:42:14.292788    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.726589336Z" level=info msg="Loading containers: done."
	I0314 19:42:14.292879    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.750141296Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0314 19:42:14.292879    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.750832983Z" level=info msg="Daemon has completed initialization"
	I0314 19:42:14.292963    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 systemd[1]: Started Docker Application Container Engine.
	I0314 19:42:14.292963    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.799522730Z" level=info msg="API listen on [::]:2376"
	I0314 19:42:14.292963    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.799691776Z" level=info msg="API listen on /var/run/docker.sock"
	I0314 19:42:14.293048    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 systemd[1]: Stopping Docker Application Container Engine...
	I0314 19:42:14.293048    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.824796168Z" level=info msg="Processing signal 'terminated'"
	I0314 19:42:14.293131    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.825961557Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0314 19:42:14.293131    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826585605Z" level=info msg="Daemon shutdown complete"
	I0314 19:42:14.293131    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826653911Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0314 19:42:14.293215    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826812323Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: docker.service: Deactivated successfully.
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: Stopped Docker Application Container Engine.
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: Starting Docker Application Container Engine...
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.899936864Z" level=info msg="Starting up"
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.900739426Z" level=info msg="containerd not running, starting managed containerd"
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.901763504Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1049
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.930795337Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.957961927Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958063735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958107338Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958123339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958150841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958163842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958360458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958444864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.293829    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958463766Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0314 19:42:14.293829    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958475466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.293936    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958502569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.293936    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958670881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.294024    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961627209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.294024    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961715316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.294108    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961871928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.294191    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961949634Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961985336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962005238Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962017139Z" level=info msg="metadata content store policy set" policy=shared
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962188852Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962280259Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962311462Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962328263Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962344564Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962393368Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962810900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962939310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963018216Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963036317Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963060419Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963076820Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963091221Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.294813    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963106323Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.294813    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963121324Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.294813    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963135425Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.295045    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963148726Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.295045    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963162027Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.295199    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963184029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295240    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963205330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295240    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963220631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295240    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963270235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295331    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963286336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295331    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963300438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295331    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963313039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295331    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963326640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295471    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963341141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295471    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963357642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295538    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963369743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295538    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963382444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295597    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963395545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295597    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963411646Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0314 19:42:14.295597    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963433148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295597    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963449149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295713    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963461550Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0314 19:42:14.295713    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963512954Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0314 19:42:14.295713    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963529855Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0314 19:42:14.295713    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963593860Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963606261Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963665466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963679767Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963695368Z" level=info msg="NRI interface is disabled by configuration."
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.964176205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.964503330Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.965392899Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0314 19:42:14.296130    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.966787506Z" level=info msg="containerd successfully booted in 0.037267s"
	I0314 19:42:14.296130    8428 command_runner.go:130] > Mar 14 19:40:54 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:54.945087153Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0314 19:42:14.296216    8428 command_runner.go:130] > Mar 14 19:40:54 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:54.972020025Z" level=info msg="Loading containers: start."
	I0314 19:42:14.296251    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.259462934Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0314 19:42:14.296297    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.336883289Z" level=info msg="Loading containers: done."
	I0314 19:42:14.296336    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.370669888Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0314 19:42:14.296411    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.370874904Z" level=info msg="Daemon has completed initialization"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.415311921Z" level=info msg="API listen on /var/run/docker.sock"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.415467233Z" level=info msg="API listen on [::]:2376"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 systemd[1]: Started Docker Application Container Engine.
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Loaded network plugin cni"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker Info: &{ID:04f4855f-417a-422c-b5bb-3cf8a43fb438 Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2024-03-14T19:40:56.401787998Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 Ke
rnelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0004c0150 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-442000 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[nam
e=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Start cri-dockerd grpc backend"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-7446n_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773\""
	I0314 19:42:14.297011    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-d22jc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0\""
	I0314 19:42:14.297074    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294795352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297106    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294882858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297134    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294903860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297168    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.295303891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297207    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.380666857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297248    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.380946878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297248    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.381075288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297248    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.381588628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297248    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418754186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297337    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418872295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297337    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418919499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297380    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.419130315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297422    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35dd339c8a08d84d0d1a4d2c062b04d44baff78d20c6ed33ce967d50c18eaa3c/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.297422    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.449937485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297422    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450067495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297480    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450100297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297480    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450295012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297538    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67475bf80ddd91df7549842450a8d92c27cd16f814cd4e4c750a7cad7d82fc9f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.297538    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a27fa2188ee4cf0c44cde0f8cae03a83655bc574c856082192e3261801efcc72/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.297598    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c70744e60ac50b50085376d0c124ff15cc884b8a836b0085ef71a65ddb06bcfd/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.297640    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782527266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297640    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782834890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782945299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.783324628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950307171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950638097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950847113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.951959699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.033329657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.033826996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.034090516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.034801671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038389546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038570160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038686569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038972291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:05Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056067890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056148096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056166397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056406816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.109761119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110023440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110099145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110475674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.116978275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298211    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117046280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298250    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117060481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298250    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117158888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a723f141543f2007cc07e048ef5836fca4ae70749b7266630f6c890bb233c09a/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f513a7aff67200987eb0f28647720ea4cb9bbdb684fc85d1b08c0dd54563517d/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432676357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432829669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432849370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.433004382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.579105320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580432922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580451623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580554931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9176b55446637c4407c9a64ce7d85fce2b395bcc0a22061f5f7ff304ff2d47f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.897653021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.897936143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.898062553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.898459584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1043]: time="2024-03-14T19:41:37.705977514Z" level=info msg="ignoring event" container=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706482647Z" level=info msg="shim disconnected" id=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 namespace=moby
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706677460Z" level=warning msg="cleaning up after shim disconnected" id=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 namespace=moby
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706692261Z" level=info msg="cleaning up dead shim" namespace=moby
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663136392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663371709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663411212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663537821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837487028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837604337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837625738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837719345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.848167835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849098605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849287919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849656747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.575693713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.575950032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.576019637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577004211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577168224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577288033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577583255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.576656985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:13 multinode-442000 dockerd[1043]: 2024/03/14 19:42:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:14.327783    8428 logs.go:123] Gathering logs for kube-proxy [2a62baf3f1b4] ...
	I0314 19:42:14.327783    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a62baf3f1b4"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.247796       1 server_others.go:69] "Using iptables proxy"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.275162       1 node.go:141] Successfully retrieved node IP: 172.17.86.124
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.379821       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.379851       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.395429       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.395506       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.395856       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.395890       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.417861       1 config.go:188] "Starting service config controller"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.417913       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.417950       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.420511       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.426566       1 config.go:315] "Starting node config controller"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.426600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.519508       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.524347       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.527360       1 shared_informer.go:318] Caches are synced for node config
	I0314 19:42:14.356981    8428 logs.go:123] Gathering logs for dmesg ...
	I0314 19:42:14.356981    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:42:14.379440    8428 command_runner.go:130] > [Mar14 19:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0314 19:42:14.379477    8428 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0314 19:42:14.379477    8428 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0314 19:42:14.379556    8428 command_runner.go:130] > [  +0.111500] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0314 19:42:14.379556    8428 command_runner.go:130] > [  +0.025646] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0314 19:42:14.379556    8428 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.051209] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.017569] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0314 19:42:14.379616    8428 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +5.774438] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.663188] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +1.473946] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +5.849126] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0314 19:42:14.379616    8428 command_runner.go:130] > [Mar14 19:40] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.179743] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [ +24.853688] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.096946] kauditd_printk_skb: 73 callbacks suppressed
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.497369] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.185545] systemd-fstab-generator[1021]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.215423] systemd-fstab-generator[1035]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +2.887443] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.193519] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.182072] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.258988] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.819687] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.099817] kauditd_printk_skb: 205 callbacks suppressed
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +2.940519] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [Mar14 19:41] kauditd_printk_skb: 84 callbacks suppressed
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +4.042735] systemd-fstab-generator[3087]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +7.733278] kauditd_printk_skb: 70 callbacks suppressed
	I0314 19:42:14.381741    8428 logs.go:123] Gathering logs for kube-apiserver [a598d24960de] ...
	I0314 19:42:14.381741    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a598d24960de"
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:02.580148       1 options.go:220] external host was not specified, using 172.17.93.236
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:02.584195       1 server.go:148] Version: v1.28.4
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:02.584361       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:03.945945       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:03.963375       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:03.963415       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:03.963973       1 instance.go:298] Using reconciler: lease
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:04.031000       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0314 19:42:14.413376    8428 command_runner.go:130] ! W0314 19:41:04.031118       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.413376    8428 command_runner.go:130] ! I0314 19:41:04.342643       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0314 19:42:14.413434    8428 command_runner.go:130] ! I0314 19:41:04.343120       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0314 19:42:14.413434    8428 command_runner.go:130] ! I0314 19:41:04.862959       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0314 19:42:14.413478    8428 command_runner.go:130] ! I0314 19:41:04.875745       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0314 19:42:14.413517    8428 command_runner.go:130] ! W0314 19:41:04.875858       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.413543    8428 command_runner.go:130] ! W0314 19:41:04.875867       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.413586    8428 command_runner.go:130] ! I0314 19:41:04.876422       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0314 19:42:14.413625    8428 command_runner.go:130] ! W0314 19:41:04.876506       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.413625    8428 command_runner.go:130] ! I0314 19:41:04.877676       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0314 19:42:14.413675    8428 command_runner.go:130] ! I0314 19:41:04.878707       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0314 19:42:14.413675    8428 command_runner.go:130] ! W0314 19:41:04.878804       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0314 19:42:14.413675    8428 command_runner.go:130] ! W0314 19:41:04.878812       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0314 19:42:14.413675    8428 command_runner.go:130] ! I0314 19:41:04.881331       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0314 19:42:14.413763    8428 command_runner.go:130] ! W0314 19:41:04.881418       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0314 19:42:14.413763    8428 command_runner.go:130] ! I0314 19:41:04.882613       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0314 19:42:14.413763    8428 command_runner.go:130] ! W0314 19:41:04.882706       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.413763    8428 command_runner.go:130] ! W0314 19:41:04.882714       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.413901    8428 command_runner.go:130] ! I0314 19:41:04.883473       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0314 19:42:14.413901    8428 command_runner.go:130] ! W0314 19:41:04.883562       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.413969    8428 command_runner.go:130] ! W0314 19:41:04.883619       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! I0314 19:41:04.884340       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0314 19:42:14.414044    8428 command_runner.go:130] ! I0314 19:41:04.886289       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.886373       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.886382       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! I0314 19:41:04.886877       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.886971       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.886979       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! I0314 19:41:04.888213       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.888261       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! I0314 19:41:04.903461       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.903509       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.903517       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! I0314 19:41:04.906409       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.906458       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.906466       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.414576    8428 command_runner.go:130] ! I0314 19:41:04.915366       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0314 19:42:14.414622    8428 command_runner.go:130] ! W0314 19:41:04.915463       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414622    8428 command_runner.go:130] ! W0314 19:41:04.915472       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.414729    8428 command_runner.go:130] ! I0314 19:41:04.916839       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0314 19:42:14.414729    8428 command_runner.go:130] ! I0314 19:41:04.918318       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0314 19:42:14.414817    8428 command_runner.go:130] ! W0314 19:41:04.918410       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414878    8428 command_runner.go:130] ! W0314 19:41:04.918418       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.414918    8428 command_runner.go:130] ! I0314 19:41:04.922469       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0314 19:42:14.414965    8428 command_runner.go:130] ! W0314 19:41:04.922563       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0314 19:42:14.415014    8428 command_runner.go:130] ! W0314 19:41:04.922576       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0314 19:42:14.415058    8428 command_runner.go:130] ! I0314 19:41:04.923589       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0314 19:42:14.415107    8428 command_runner.go:130] ! W0314 19:41:04.923671       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.415167    8428 command_runner.go:130] ! W0314 19:41:04.923678       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.415218    8428 command_runner.go:130] ! I0314 19:41:04.924323       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0314 19:42:14.415218    8428 command_runner.go:130] ! W0314 19:41:04.924409       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.415274    8428 command_runner.go:130] ! I0314 19:41:04.946149       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0314 19:42:14.415373    8428 command_runner.go:130] ! W0314 19:41:04.946188       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.415412    8428 command_runner.go:130] ! I0314 19:41:05.649195       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.649351       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.650113       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.651281       1 secure_serving.go:213] Serving securely on [::]:8443
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.651311       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.651726       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.651907       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.654468       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.654814       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.655201       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.656049       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.656308       1 available_controller.go:423] Starting AvailableConditionController
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.656404       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.651597       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.656599       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.658623       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.658785       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.659483       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0314 19:42:14.415979    8428 command_runner.go:130] ! I0314 19:41:05.661076       1 aggregator.go:164] waiting for initial CRD sync...
	I0314 19:42:14.416026    8428 command_runner.go:130] ! I0314 19:41:05.662487       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0314 19:42:14.416069    8428 command_runner.go:130] ! I0314 19:41:05.662789       1 controller.go:78] Starting OpenAPI AggregationController
	I0314 19:42:14.416109    8428 command_runner.go:130] ! I0314 19:41:05.727194       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.728512       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729067       1 controller.go:116] Starting legacy_token_tracking_controller
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729317       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729432       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729507       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729606       1 controller.go:134] Starting OpenAPI controller
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729710       1 controller.go:85] Starting OpenAPI V3 controller
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729812       1 naming_controller.go:291] Starting NamingConditionController
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729911       1 establishing_controller.go:76] Starting EstablishingController
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.730411       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.730521       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.730616       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.799477       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.813580       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.830168       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.830217       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.830281       1 aggregator.go:166] initial CRD sync complete...
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.830289       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.830295       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 19:42:14.416672    8428 command_runner.go:130] ! I0314 19:41:05.830301       1 cache.go:39] Caches are synced for autoregister controller
	I0314 19:42:14.416720    8428 command_runner.go:130] ! I0314 19:41:05.846941       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 19:42:14.416782    8428 command_runner.go:130] ! I0314 19:41:05.857057       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 19:42:14.416829    8428 command_runner.go:130] ! I0314 19:41:05.858966       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 19:42:14.416873    8428 command_runner.go:130] ! I0314 19:41:05.865554       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 19:42:14.416963    8428 command_runner.go:130] ! I0314 19:41:05.865721       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 19:42:14.416988    8428 command_runner.go:130] ! I0314 19:41:06.667315       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 19:42:14.417089    8428 command_runner.go:130] ! W0314 19:41:07.118314       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.17.93.236]
	I0314 19:42:14.417132    8428 command_runner.go:130] ! I0314 19:41:07.120612       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 19:42:14.417132    8428 command_runner.go:130] ! I0314 19:41:07.135973       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 19:42:14.417207    8428 command_runner.go:130] ! I0314 19:41:09.049225       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 19:42:14.417207    8428 command_runner.go:130] ! I0314 19:41:09.264220       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 19:42:14.417207    8428 command_runner.go:130] ! I0314 19:41:09.277110       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 19:42:14.417207    8428 command_runner.go:130] ! I0314 19:41:09.393446       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 19:42:14.417207    8428 command_runner.go:130] ! I0314 19:41:09.422214       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 19:42:14.424616    8428 logs.go:123] Gathering logs for coredns [b159aedddf94] ...
	I0314 19:42:14.424616    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b159aedddf94"
	I0314 19:42:14.458823    8428 command_runner.go:130] > .:53
	I0314 19:42:14.459229    8428 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0314 19:42:14.459229    8428 command_runner.go:130] > CoreDNS-1.10.1
	I0314 19:42:14.459229    8428 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0314 19:42:14.459269    8428 command_runner.go:130] > [INFO] 127.0.0.1:38965 - 37747 "HINFO IN 9162400456686827331.1281991328183180689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052220616s
	I0314 19:42:14.459541    8428 logs.go:123] Gathering logs for coredns [8899bc003893] ...
	I0314 19:42:14.459541    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8899bc003893"
	I0314 19:42:14.488904    8428 command_runner.go:130] > .:53
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0314 19:42:14.488904    8428 command_runner.go:130] > CoreDNS-1.10.1
	I0314 19:42:14.488904    8428 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 127.0.0.1:56069 - 18242 "HINFO IN 687842018263708116.264844942244880205. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.040568923s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:42598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000297623s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:49284 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.094729955s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:58753 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.047978925s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:60240 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.250879171s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:35705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107809s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:38792 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00013461s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:53339 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000060304s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:55975 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000059805s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:55630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117109s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:50181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.122219329s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:58918 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194615s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:48641 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012501s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:57540 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.0346353s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:59969 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000278722s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:51295 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167413s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:45005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148512s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:51938 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100608s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:46248 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00024762s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:46501 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100408s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:52414 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056704s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:44908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000121409s
	I0314 19:42:14.489918    8428 command_runner.go:130] > [INFO] 10.244.1.2:49578 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011941s
	I0314 19:42:14.489918    8428 command_runner.go:130] > [INFO] 10.244.1.2:51057 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060205s
	I0314 19:42:14.489918    8428 command_runner.go:130] > [INFO] 10.244.1.2:56240 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055805s
	I0314 19:42:14.489918    8428 command_runner.go:130] > [INFO] 10.244.0.3:32901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172914s
	I0314 19:42:14.489918    8428 command_runner.go:130] > [INFO] 10.244.0.3:41115 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149912s
	I0314 19:42:14.490029    8428 command_runner.go:130] > [INFO] 10.244.0.3:40494 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013161s
	I0314 19:42:14.490076    8428 command_runner.go:130] > [INFO] 10.244.0.3:40575 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077106s
	I0314 19:42:14.490076    8428 command_runner.go:130] > [INFO] 10.244.1.2:55307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194115s
	I0314 19:42:14.490076    8428 command_runner.go:130] > [INFO] 10.244.1.2:46435 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00025832s
	I0314 19:42:14.490158    8428 command_runner.go:130] > [INFO] 10.244.1.2:52095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156813s
	I0314 19:42:14.490158    8428 command_runner.go:130] > [INFO] 10.244.1.2:57849 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012701s
	I0314 19:42:14.490158    8428 command_runner.go:130] > [INFO] 10.244.0.3:47270 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244119s
	I0314 19:42:14.490158    8428 command_runner.go:130] > [INFO] 10.244.0.3:59009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000411532s
	I0314 19:42:14.490253    8428 command_runner.go:130] > [INFO] 10.244.0.3:40925 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108108s
	I0314 19:42:14.490253    8428 command_runner.go:130] > [INFO] 10.244.0.3:56417 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000067706s
	I0314 19:42:14.490253    8428 command_runner.go:130] > [INFO] 10.244.1.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108409s
	I0314 19:42:14.490253    8428 command_runner.go:130] > [INFO] 10.244.1.2:38949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118209s
	I0314 19:42:14.490253    8428 command_runner.go:130] > [INFO] 10.244.1.2:56933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156413s
	I0314 19:42:14.490350    8428 command_runner.go:130] > [INFO] 10.244.1.2:35971 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000072406s
	I0314 19:42:14.490350    8428 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0314 19:42:14.490350    8428 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0314 19:42:14.493252    8428 logs.go:123] Gathering logs for kube-scheduler [32d90a3ea213] ...
	I0314 19:42:14.493324    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d90a3ea213"
	I0314 19:42:14.520031    8428 command_runner.go:130] ! I0314 19:41:03.376319       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:14.520224    8428 command_runner.go:130] ! W0314 19:41:05.770317       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0314 19:42:14.520224    8428 command_runner.go:130] ! W0314 19:41:05.770426       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:14.520302    8428 command_runner.go:130] ! W0314 19:41:05.770581       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0314 19:42:14.520302    8428 command_runner.go:130] ! W0314 19:41:05.770640       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.841573       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.841674       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.844125       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.845062       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.845143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.845293       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.946840       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:14.523302    8428 logs.go:123] Gathering logs for kube-controller-manager [12baf105f0bb] ...
	I0314 19:42:14.523375    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12baf105f0bb"
	I0314 19:42:14.555558    8428 command_runner.go:130] ! I0314 19:41:03.101287       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:14.555558    8428 command_runner.go:130] ! I0314 19:41:03.872151       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 19:42:14.555558    8428 command_runner.go:130] ! I0314 19:41:03.874301       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.555648    8428 command_runner.go:130] ! I0314 19:41:03.879645       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:14.555648    8428 command_runner.go:130] ! I0314 19:41:03.880765       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:14.555648    8428 command_runner.go:130] ! I0314 19:41:03.883873       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:42:14.555648    8428 command_runner.go:130] ! I0314 19:41:03.883977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:14.555648    8428 command_runner.go:130] ! I0314 19:41:07.787609       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0314 19:42:14.555710    8428 command_runner.go:130] ! I0314 19:41:07.796442       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0314 19:42:14.555710    8428 command_runner.go:130] ! I0314 19:41:07.796953       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0314 19:42:14.555710    8428 command_runner.go:130] ! I0314 19:41:07.798900       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.848846       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.849015       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.849025       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.855296       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.858491       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.858512       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.864964       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.865080       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.865088       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.870629       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.871089       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.871332       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.889997       1 shared_informer.go:318] Caches are synced for tokens
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.899597       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.900355       1 horizontal.go:200] "Starting HPA controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.901325       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.921217       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.922072       1 disruption.go:433] "Sending events to api server."
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.922293       1 disruption.go:444] "Starting disruption controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.922481       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.927437       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.929290       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.929325       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.936410       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.936565       1 controller.go:169] "Starting ephemeral volume controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.936765       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.954720       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.954939       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.955142       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.970387       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.970474       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.970624       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.971307       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.975049       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.973288       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.974848       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.974977       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0314 19:42:14.556276    8428 command_runner.go:130] ! I0314 19:41:07.977476       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:42:14.556276    8428 command_runner.go:130] ! I0314 19:41:07.974992       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.556276    8428 command_runner.go:130] ! I0314 19:41:07.975020       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0314 19:42:14.556276    8428 command_runner.go:130] ! I0314 19:41:07.977827       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:14.556336    8428 command_runner.go:130] ! I0314 19:41:07.975030       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.556336    8428 command_runner.go:130] ! I0314 19:41:07.990774       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0314 19:42:14.556336    8428 command_runner.go:130] ! I0314 19:41:07.995647       1 ttl_controller.go:124] "Starting TTL controller"
	I0314 19:42:14.556336    8428 command_runner.go:130] ! I0314 19:41:07.995667       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0314 19:42:14.556386    8428 command_runner.go:130] ! I0314 19:41:08.019000       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0314 19:42:14.556386    8428 command_runner.go:130] ! I0314 19:41:08.019415       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:42:14.556386    8428 command_runner.go:130] ! I0314 19:41:08.019568       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:42:14.556386    8428 command_runner.go:130] ! I0314 19:41:08.019700       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:42:14.556386    8428 command_runner.go:130] ! E0314 19:41:08.029770       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0314 19:42:14.556442    8428 command_runner.go:130] ! I0314 19:41:08.029950       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0314 19:42:14.556442    8428 command_runner.go:130] ! I0314 19:41:08.030066       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0314 19:42:14.556442    8428 command_runner.go:130] ! I0314 19:41:08.030148       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0314 19:42:14.556442    8428 command_runner.go:130] ! I0314 19:41:08.056856       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.058933       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.059323       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.062839       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.063208       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.063512       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.070376       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.070635       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.070748       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.071006       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.071615       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.079849       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.080117       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.081765       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.084328       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.084731       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.085301       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.092529       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.092761       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.092771       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.097268       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.097521       1 expand_controller.go:328] "Starting expand controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.097531       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.097559       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.117374       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.117512       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.117524       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.126388       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.127645       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.127702       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.131336       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.131505       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! E0314 19:41:08.142589       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.142621       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.150057       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.152574       1 gc_controller.go:101] "Starting GC controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.152724       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.302881       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.303337       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! W0314 19:41:08.303671       1 shared_informer.go:593] resyncPeriod 21h24m41.293167603s is smaller than resyncCheckPeriod 22h48m56.659186017s and the informer has already started. Changing it to 22h48m56.659186017s
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.303970       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0314 19:42:14.557015    8428 command_runner.go:130] ! I0314 19:41:08.304292       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0314 19:42:14.557015    8428 command_runner.go:130] ! I0314 19:41:08.304532       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0314 19:42:14.557015    8428 command_runner.go:130] ! I0314 19:41:08.304816       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0314 19:42:14.557073    8428 command_runner.go:130] ! I0314 19:41:08.305073       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0314 19:42:14.557073    8428 command_runner.go:130] ! I0314 19:41:08.305373       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0314 19:42:14.557073    8428 command_runner.go:130] ! I0314 19:41:08.305634       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0314 19:42:14.557073    8428 command_runner.go:130] ! I0314 19:41:08.305976       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0314 19:42:14.557121    8428 command_runner.go:130] ! I0314 19:41:08.306286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0314 19:42:14.557121    8428 command_runner.go:130] ! I0314 19:41:08.306541       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0314 19:42:14.557121    8428 command_runner.go:130] ! I0314 19:41:08.306699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0314 19:42:14.557121    8428 command_runner.go:130] ! I0314 19:41:08.306843       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0314 19:42:14.557186    8428 command_runner.go:130] ! I0314 19:41:08.307119       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0314 19:42:14.557186    8428 command_runner.go:130] ! I0314 19:41:08.307379       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:42:14.557186    8428 command_runner.go:130] ! I0314 19:41:08.307553       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0314 19:42:14.557186    8428 command_runner.go:130] ! I0314 19:41:08.307700       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0314 19:42:14.557237    8428 command_runner.go:130] ! I0314 19:41:08.308022       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0314 19:42:14.557237    8428 command_runner.go:130] ! I0314 19:41:08.308207       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0314 19:42:14.557237    8428 command_runner.go:130] ! I0314 19:41:08.308473       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:42:14.557237    8428 command_runner.go:130] ! I0314 19:41:08.308664       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:42:14.557292    8428 command_runner.go:130] ! I0314 19:41:08.309850       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:14.557292    8428 command_runner.go:130] ! I0314 19:41:08.310060       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:42:14.557356    8428 command_runner.go:130] ! I0314 19:41:08.344084       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0314 19:42:14.557356    8428 command_runner.go:130] ! I0314 19:41:08.344536       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0314 19:42:14.557356    8428 command_runner.go:130] ! I0314 19:41:08.344832       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0314 19:42:14.557469    8428 command_runner.go:130] ! I0314 19:41:08.397742       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:42:14.557469    8428 command_runner.go:130] ! I0314 19:41:08.400742       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:42:14.557469    8428 command_runner.go:130] ! I0314 19:41:08.401126       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:42:14.557469    8428 command_runner.go:130] ! I0314 19:41:08.448054       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:42:14.557469    8428 command_runner.go:130] ! I0314 19:41:08.448538       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:42:14.557469    8428 command_runner.go:130] ! I0314 19:41:08.495738       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0314 19:42:14.557539    8428 command_runner.go:130] ! I0314 19:41:08.496045       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0314 19:42:14.557539    8428 command_runner.go:130] ! I0314 19:41:08.496112       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0314 19:42:14.557539    8428 command_runner.go:130] ! I0314 19:41:08.547967       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:42:14.557539    8428 command_runner.go:130] ! I0314 19:41:08.548352       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.548556       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.593742       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.593860       1 job_controller.go:226] "Starting job controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.594297       1 shared_informer.go:311] Waiting for caches to sync for job
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.650392       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.650668       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.650851       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.704591       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.704627       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.704645       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.768485       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.768824       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.769281       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.769315       1 shared_informer.go:311] Waiting for caches to sync for node
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.779639       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.796167       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.796514       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.796299       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000\" does not exist"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.799471       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.799722       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.799937       1 shared_informer.go:318] Caches are synced for TTL
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.800165       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.802329       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.802379       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.806338       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.836188       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.842003       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.842516       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.845380       1 shared_informer.go:318] Caches are synced for service account
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.848744       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.849154       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.849988       1 shared_informer.go:318] Caches are synced for namespace
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.850447       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.851139       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.852942       1 shared_informer.go:318] Caches are synced for GC
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.860631       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.862001       1 shared_informer.go:318] Caches are synced for cronjob
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.862045       1 shared_informer.go:318] Caches are synced for PVC protection
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.864453       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.865205       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.870312       1 shared_informer.go:318] Caches are synced for node
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.871490       1 range_allocator.go:174] "Sending events to api server"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.871652       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.871843       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.871901       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.871655       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.871600       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.877449       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.878919       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.880521       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.886337       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.895206       1 shared_informer.go:318] Caches are synced for job
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.898522       1 shared_informer.go:318] Caches are synced for expand
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.902360       1 shared_informer.go:318] Caches are synced for deployment
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.905493       1 shared_informer.go:318] Caches are synced for HPA
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.906213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.805878ms"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.908178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.802µs"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.908549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.720551ms"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.911784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.705µs"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.919410       1 shared_informer.go:318] Caches are synced for crt configmap
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.923587       1 shared_informer.go:318] Caches are synced for disruption
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.974303       1 shared_informer.go:318] Caches are synced for taint
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.974653       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.975178       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.975416       1 taint_manager.go:210] "Sending events to api server"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.977051       1 event.go:307] "Event occurred" object="multinode-442000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000 event: Registered Node multinode-442000 in Controller"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.977995       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.978165       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.980168       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.982162       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:19.001384       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:19.002299       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:19.002838       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:19.003844       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:19.010468       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:19.393074       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:19.393161       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:19.450734       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:41.542550       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:44.029818       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:44.029853       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-d22jc" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-d22jc"
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:44.029866       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-7446n" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-7446n"
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:59.058949       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m02 status is now: NodeNotReady"
	I0314 19:42:14.559040    8428 command_runner.go:130] ! I0314 19:41:59.074940       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8drpb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.559040    8428 command_runner.go:130] ! I0314 19:41:59.085508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.938337ms"
	I0314 19:42:14.559040    8428 command_runner.go:130] ! I0314 19:41:59.086845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.804µs"
	I0314 19:42:14.559040    8428 command_runner.go:130] ! I0314 19:41:59.099029       1 event.go:307] "Event occurred" object="kube-system/kindnet-c7m4p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.559040    8428 command_runner.go:130] ! I0314 19:41:59.122329       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-72dzs" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.559124    8428 command_runner.go:130] ! I0314 19:42:12.281109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.332951ms"
	I0314 19:42:14.559124    8428 command_runner.go:130] ! I0314 19:42:12.281325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="115.209µs"
	I0314 19:42:14.559124    8428 command_runner.go:130] ! I0314 19:42:12.305037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.006µs"
	I0314 19:42:14.559175    8428 command_runner.go:130] ! I0314 19:42:12.366507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.074928ms"
	I0314 19:42:14.559175    8428 command_runner.go:130] ! I0314 19:42:12.368560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.408µs"
	I0314 19:42:14.573536    8428 logs.go:123] Gathering logs for container status ...
	I0314 19:42:14.573536    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:42:14.664901    8428 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0314 19:42:14.664901    8428 command_runner.go:130] > b159aedddf94a       ead0a4a53df89                                                                                         3 seconds ago        Running             coredns                   1                   89f326046d00d       coredns-5dd5756b68-d22jc
	I0314 19:42:14.664901    8428 command_runner.go:130] > 813492ad2d666       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   cddebe360bf3a       busybox-5b5d89c9d6-7446n
	I0314 19:42:14.664901    8428 command_runner.go:130] > 3167caea2534f       6e38f40d628db                                                                                         21 seconds ago       Running             storage-provisioner       2                   a723f141543f2       storage-provisioner
	I0314 19:42:14.664901    8428 command_runner.go:130] > 999e4c168afef       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   a9176b5544663       kindnet-7b9lf
	I0314 19:42:14.664901    8428 command_runner.go:130] > 497007582e446       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   f513a7aff6720       kube-proxy-cg28g
	I0314 19:42:14.664901    8428 command_runner.go:130] > 2876622a2618d       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   a723f141543f2       storage-provisioner
	I0314 19:42:14.664901    8428 command_runner.go:130] > 32d90a3ea2131       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   c70744e60ac50       kube-scheduler-multinode-442000
	I0314 19:42:14.664901    8428 command_runner.go:130] > a598d24960de8       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   a27fa2188ee4c       kube-apiserver-multinode-442000
	I0314 19:42:14.664901    8428 command_runner.go:130] > 12baf105f0bb2       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   67475bf80ddd9       kube-controller-manager-multinode-442000
	I0314 19:42:14.664901    8428 command_runner.go:130] > a81a9c43c3552       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   35dd339c8a08d       etcd-multinode-442000
	I0314 19:42:14.664901    8428 command_runner.go:130] > 0cd43cdaa31c9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   fa0f2372c88ee       busybox-5b5d89c9d6-7446n
	I0314 19:42:14.664901    8428 command_runner.go:130] > 8899bc0038935       ead0a4a53df89                                                                                         22 minutes ago       Exited              coredns                   0                   a3dba3fc54c01       coredns-5dd5756b68-d22jc
	I0314 19:42:14.664901    8428 command_runner.go:130] > 1a321c0e89971       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              22 minutes ago       Exited              kindnet-cni               0                   b046b896affe9       kindnet-7b9lf
	I0314 19:42:14.664901    8428 command_runner.go:130] > 2a62baf3f1b46       83f6cc407eed8                                                                                         22 minutes ago       Exited              kube-proxy                0                   9b3244b47278e       kube-proxy-cg28g
	I0314 19:42:14.664901    8428 command_runner.go:130] > dbb603289bf16       e3db313c6dbc0                                                                                         23 minutes ago       Exited              kube-scheduler            0                   54e39762d7a64       kube-scheduler-multinode-442000
	I0314 19:42:14.664901    8428 command_runner.go:130] > 16b80f73683dc       d058aa5ab969c                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   102c907609a3a       kube-controller-manager-multinode-442000
	I0314 19:42:17.187279    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:42:17.211159    8428 command_runner.go:130] > 2008
	I0314 19:42:17.211296    8428 api_server.go:72] duration metric: took 1m6.3722812s to wait for apiserver process to appear ...
	I0314 19:42:17.211296    8428 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:42:17.219950    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 19:42:17.246006    8428 command_runner.go:130] > a598d24960de
	I0314 19:42:17.246006    8428 logs.go:276] 1 containers: [a598d24960de]
	I0314 19:42:17.252918    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 19:42:17.278095    8428 command_runner.go:130] > a81a9c43c355
	I0314 19:42:17.278260    8428 logs.go:276] 1 containers: [a81a9c43c355]
	I0314 19:42:17.285100    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 19:42:17.312071    8428 command_runner.go:130] > b159aedddf94
	I0314 19:42:17.312363    8428 command_runner.go:130] > 8899bc003893
	I0314 19:42:17.312399    8428 logs.go:276] 2 containers: [b159aedddf94 8899bc003893]
	I0314 19:42:17.321791    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 19:42:17.350447    8428 command_runner.go:130] > 32d90a3ea213
	I0314 19:42:17.350447    8428 command_runner.go:130] > dbb603289bf1
	I0314 19:42:17.350447    8428 logs.go:276] 2 containers: [32d90a3ea213 dbb603289bf1]
	I0314 19:42:17.358067    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 19:42:17.386725    8428 command_runner.go:130] > 497007582e44
	I0314 19:42:17.386725    8428 command_runner.go:130] > 2a62baf3f1b4
	I0314 19:42:17.386725    8428 logs.go:276] 2 containers: [497007582e44 2a62baf3f1b4]
	I0314 19:42:17.394510    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 19:42:17.419653    8428 command_runner.go:130] > 12baf105f0bb
	I0314 19:42:17.419653    8428 command_runner.go:130] > 16b80f73683d
	I0314 19:42:17.419653    8428 logs.go:276] 2 containers: [12baf105f0bb 16b80f73683d]
	I0314 19:42:17.426754    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 19:42:17.452548    8428 command_runner.go:130] > 999e4c168afe
	I0314 19:42:17.452609    8428 command_runner.go:130] > 1a321c0e8997
	I0314 19:42:17.452609    8428 logs.go:276] 2 containers: [999e4c168afe 1a321c0e8997]
	I0314 19:42:17.452609    8428 logs.go:123] Gathering logs for coredns [b159aedddf94] ...
	I0314 19:42:17.452609    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b159aedddf94"
	I0314 19:42:17.479998    8428 command_runner.go:130] > .:53
	I0314 19:42:17.479998    8428 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0314 19:42:17.479998    8428 command_runner.go:130] > CoreDNS-1.10.1
	I0314 19:42:17.479998    8428 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0314 19:42:17.479998    8428 command_runner.go:130] > [INFO] 127.0.0.1:38965 - 37747 "HINFO IN 9162400456686827331.1281991328183180689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052220616s
	I0314 19:42:17.481847    8428 logs.go:123] Gathering logs for coredns [8899bc003893] ...
	I0314 19:42:17.481847    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8899bc003893"
	I0314 19:42:17.511937    8428 command_runner.go:130] > .:53
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0314 19:42:17.511937    8428 command_runner.go:130] > CoreDNS-1.10.1
	I0314 19:42:17.511937    8428 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 127.0.0.1:56069 - 18242 "HINFO IN 687842018263708116.264844942244880205. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.040568923s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.0.3:42598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000297623s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.0.3:49284 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.094729955s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.0.3:58753 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.047978925s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.0.3:60240 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.250879171s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.1.2:35705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107809s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.1.2:38792 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00013461s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.1.2:53339 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000060304s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:55975 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000059805s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:55630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117109s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:50181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.122219329s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:58918 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194615s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:48641 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012501s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:57540 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.0346353s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:59969 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000278722s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:51295 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167413s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:45005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148512s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:51938 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100608s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:46248 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00024762s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:46501 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100408s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:52414 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056704s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:44908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000121409s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:49578 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011941s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:51057 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060205s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:56240 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055805s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:32901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172914s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:41115 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149912s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:40494 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013161s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:40575 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077106s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:55307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194115s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:46435 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00025832s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:52095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156813s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:57849 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012701s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:47270 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244119s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:59009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000411532s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:40925 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108108s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:56417 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000067706s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108409s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:38949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118209s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:56933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156413s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:35971 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000072406s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0314 19:42:17.515934    8428 logs.go:123] Gathering logs for kube-scheduler [dbb603289bf1] ...
	I0314 19:42:17.515934    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb603289bf1"
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:18:59.007917       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:17.549523    8428 command_runner.go:130] ! W0314 19:19:00.211611       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0314 19:42:17.549523    8428 command_runner.go:130] ! W0314 19:19:00.212802       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:17.549523    8428 command_runner.go:130] ! W0314 19:19:00.212990       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0314 19:42:17.549523    8428 command_runner.go:130] ! W0314 19:19:00.213108       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:19:00.283055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:19:00.284207       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:19:00.288027       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:19:00.288233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:19:00.288206       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:19:00.290233       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:17.550063    8428 command_runner.go:130] ! W0314 19:19:00.293166       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:17.550116    8428 command_runner.go:130] ! E0314 19:19:00.293367       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.311723       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! E0314 19:19:00.311803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.312480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! E0314 19:19:00.317665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313450       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313705       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:17.550676    8428 command_runner.go:130] ! W0314 19:19:00.313864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:17.550676    8428 command_runner.go:130] ! W0314 19:19:00.313910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! W0314 19:19:00.313978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! W0314 19:19:00.314056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! W0314 19:19:00.314091       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.318101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.318394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.318606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.318728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.318953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.319076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.319318       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.319575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.319588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:17.551281    8428 command_runner.go:130] ! E0314 19:19:00.319719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:00.319732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:00.319788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! W0314 19:19:01.268901       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:01.269219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! W0314 19:19:01.309661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:01.309894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! W0314 19:19:01.318104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:01.318410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! W0314 19:19:01.382148       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:01.382194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! W0314 19:19:01.454259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:01.454398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! W0314 19:19:01.505982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:01.506182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:17.551849    8428 command_runner.go:130] ! W0314 19:19:01.640521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.640836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.681052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.681953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.732243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.732288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.767241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.767329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.783665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.783845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.812936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.813027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.821109       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.821267       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.843311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.843339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:17.552435    8428 command_runner.go:130] ! W0314 19:19:01.914649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:17.552435    8428 command_runner.go:130] ! E0314 19:19:01.914986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:17.552435    8428 command_runner.go:130] ! I0314 19:19:04.090863       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:17.552435    8428 command_runner.go:130] ! I0314 19:38:43.236637       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0314 19:42:17.552435    8428 command_runner.go:130] ! I0314 19:38:43.237145       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0314 19:42:17.552518    8428 command_runner.go:130] ! E0314 19:38:43.237439       1 run.go:74] "command failed" err="finished without leader elect"
	I0314 19:42:17.562343    8428 logs.go:123] Gathering logs for kindnet [1a321c0e8997] ...
	I0314 19:42:17.562343    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a321c0e8997"
	I0314 19:42:17.594928    8428 command_runner.go:130] ! I0314 19:27:36.366640       1 main.go:227] handling current node
	I0314 19:42:17.594928    8428 command_runner.go:130] ! I0314 19:27:36.366652       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595497    8428 command_runner.go:130] ! I0314 19:27:36.366658       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595497    8428 command_runner.go:130] ! I0314 19:27:36.366818       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595497    8428 command_runner.go:130] ! I0314 19:27:36.366827       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595497    8428 command_runner.go:130] ! I0314 19:27:46.378468       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595555    8428 command_runner.go:130] ! I0314 19:27:46.378496       1 main.go:227] handling current node
	I0314 19:42:17.595555    8428 command_runner.go:130] ! I0314 19:27:46.378506       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595555    8428 command_runner.go:130] ! I0314 19:27:46.378513       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595555    8428 command_runner.go:130] ! I0314 19:27:46.379039       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:46.379130       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:56.393642       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:56.393700       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:56.393723       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:56.393733       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:56.394716       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:56.394779       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:06.403171       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:06.403199       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:06.403212       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:06.403219       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:06.403663       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:06.403834       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:16.415146       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:16.415237       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:16.415250       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:16.415260       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:16.415497       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:16.415703       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:26.430257       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:26.430350       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:26.430364       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:26.430372       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:26.430709       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:26.430804       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:36.445854       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:36.445897       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:36.445915       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:36.446285       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:36.446702       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:36.446731       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:46.461369       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:46.462057       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:46.462235       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:46.462250       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:46.462593       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:46.462770       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:56.477451       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:56.477483       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:56.477496       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:56.477508       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:28:56.478007       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:28:56.478089       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:29:06.484423       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:29:06.484497       1 main.go:227] handling current node
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:29:06.484559       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:29:06.484624       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:29:06.484852       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:29:06.484945       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596216    8428 command_runner.go:130] ! I0314 19:29:16.500812       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596216    8428 command_runner.go:130] ! I0314 19:29:16.500909       1 main.go:227] handling current node
	I0314 19:42:17.596216    8428 command_runner.go:130] ! I0314 19:29:16.500924       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:16.500932       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:16.501505       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:16.501585       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:26.508494       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:26.508585       1 main.go:227] handling current node
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:26.508601       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:26.508609       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:26.508822       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:26.508837       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:36.517002       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:36.517123       1 main.go:227] handling current node
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:36.517142       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:36.517155       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:36.517648       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:36.517836       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:46.530826       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:46.530962       1 main.go:227] handling current node
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:46.530978       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:46.531314       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:46.531557       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:46.531706       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:56.551916       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:56.551953       1 main.go:227] handling current node
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:56.551965       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:56.551971       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:56.552084       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:56.552107       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:06.560066       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:06.560115       1 main.go:227] handling current node
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:06.560129       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:06.560136       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:06.560429       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:06.560534       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:16.573690       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:16.573731       1 main.go:227] handling current node
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:16.573978       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596873    8428 command_runner.go:130] ! I0314 19:30:16.574067       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596873    8428 command_runner.go:130] ! I0314 19:30:16.574385       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596873    8428 command_runner.go:130] ! I0314 19:30:16.574414       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596873    8428 command_runner.go:130] ! I0314 19:30:26.589277       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596873    8428 command_runner.go:130] ! I0314 19:30:26.589488       1 main.go:227] handling current node
	I0314 19:42:17.596930    8428 command_runner.go:130] ! I0314 19:30:26.589534       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596930    8428 command_runner.go:130] ! I0314 19:30:26.589557       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596930    8428 command_runner.go:130] ! I0314 19:30:26.589802       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596930    8428 command_runner.go:130] ! I0314 19:30:26.589885       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596930    8428 command_runner.go:130] ! I0314 19:30:36.605356       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596930    8428 command_runner.go:130] ! I0314 19:30:36.605400       1 main.go:227] handling current node
	I0314 19:42:17.596986    8428 command_runner.go:130] ! I0314 19:30:36.605412       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596986    8428 command_runner.go:130] ! I0314 19:30:36.605418       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596986    8428 command_runner.go:130] ! I0314 19:30:36.605556       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596986    8428 command_runner.go:130] ! I0314 19:30:36.605625       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596986    8428 command_runner.go:130] ! I0314 19:30:46.612911       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597039    8428 command_runner.go:130] ! I0314 19:30:46.613010       1 main.go:227] handling current node
	I0314 19:42:17.597039    8428 command_runner.go:130] ! I0314 19:30:46.613025       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597039    8428 command_runner.go:130] ! I0314 19:30:46.613034       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597039    8428 command_runner.go:130] ! I0314 19:30:46.613445       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597084    8428 command_runner.go:130] ! I0314 19:30:46.615380       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597084    8428 command_runner.go:130] ! I0314 19:30:56.630605       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597084    8428 command_runner.go:130] ! I0314 19:30:56.630965       1 main.go:227] handling current node
	I0314 19:42:17.597084    8428 command_runner.go:130] ! I0314 19:30:56.631076       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597126    8428 command_runner.go:130] ! I0314 19:30:56.631132       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597126    8428 command_runner.go:130] ! I0314 19:30:56.631442       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:30:56.631542       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:06.643588       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:06.643631       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:06.643643       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:06.643650       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:06.644160       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:06.644255       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:16.650940       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:16.651187       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:16.651208       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:16.651236       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:16.651354       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:16.651374       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:26.665304       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:26.665403       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:26.665418       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:26.665427       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:26.665674       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:26.665859       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:36.681645       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:36.681680       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:36.681695       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:36.681704       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:36.682032       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:36.682062       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:46.697305       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:46.697415       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:46.697432       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:46.697444       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:46.697965       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:46.698093       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:56.705518       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:56.705613       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:56.705627       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:56.705635       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:56.706151       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:56.706269       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:32:06.716977       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:32:06.717087       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:32:06.717105       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:32:06.717116       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597877    8428 command_runner.go:130] ! I0314 19:32:06.717701       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597877    8428 command_runner.go:130] ! I0314 19:32:06.717870       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597877    8428 command_runner.go:130] ! I0314 19:32:16.738903       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597877    8428 command_runner.go:130] ! I0314 19:32:16.738946       1 main.go:227] handling current node
	I0314 19:42:17.597877    8428 command_runner.go:130] ! I0314 19:32:16.738962       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597877    8428 command_runner.go:130] ! I0314 19:32:16.738971       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597943    8428 command_runner.go:130] ! I0314 19:32:16.739310       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597943    8428 command_runner.go:130] ! I0314 19:32:16.739420       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597943    8428 command_runner.go:130] ! I0314 19:32:26.749067       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:26.749521       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:26.749656       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:26.749670       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:26.750040       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:26.750074       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:36.765313       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:36.765423       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:36.765442       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:36.765453       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:36.766102       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:36.766130       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:46.781715       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:46.781800       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:46.782151       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:46.782168       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:46.782370       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:46.782396       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:56.797473       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:56.797568       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:56.797583       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:56.797621       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:56.797733       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:56.797772       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:06.803421       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:06.803513       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:06.803527       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:06.803534       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:06.804158       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:06.804237       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:16.818983       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:16.819134       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:16.819149       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:16.819157       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:16.819421       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:16.819491       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:26.826209       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:26.826474       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:26.826509       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:26.826519       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:26.826794       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:26.826886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:36.839979       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:36.840555       1 main.go:227] handling current node
	I0314 19:42:17.598511    8428 command_runner.go:130] ! I0314 19:33:36.840828       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598511    8428 command_runner.go:130] ! I0314 19:33:36.840855       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598511    8428 command_runner.go:130] ! I0314 19:33:36.841055       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598511    8428 command_runner.go:130] ! I0314 19:33:36.841183       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598511    8428 command_runner.go:130] ! I0314 19:33:46.854483       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598566    8428 command_runner.go:130] ! I0314 19:33:46.854585       1 main.go:227] handling current node
	I0314 19:42:17.598566    8428 command_runner.go:130] ! I0314 19:33:46.854600       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598566    8428 command_runner.go:130] ! I0314 19:33:46.854608       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:46.855303       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:46.855389       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:56.867052       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:56.867136       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:56.867150       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:56.867158       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:56.867493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:56.867886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:06.874298       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:06.874391       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:06.874405       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:06.874413       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:06.874932       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:06.874962       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:16.890513       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:16.890589       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:16.890604       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:16.890612       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:16.890870       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:16.890953       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:26.908423       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:26.908576       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:26.908597       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:26.908606       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:26.909103       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:26.909271       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:36.915794       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:36.915910       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:36.915926       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:36.915935       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:36.916282       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:36.916372       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:46.931699       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:46.931833       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:46.931849       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:46.931858       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:46.932099       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:46.932124       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:56.946470       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:56.946565       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:56.946580       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:56.946588       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:56.946812       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:56.946927       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:06.960844       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:06.960939       1 main.go:227] handling current node
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:06.960954       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:06.960962       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:06.961467       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:06.961574       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:16.981993       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:16.982080       1 main.go:227] handling current node
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:16.982095       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:16.982103       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:16.982594       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:16.982673       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:26.993848       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:26.993940       1 main.go:227] handling current node
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:26.993955       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:26.993963       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:26.994360       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:26.994437       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:37.008613       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:37.008706       1 main.go:227] handling current node
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:37.008720       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:37.008727       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:37.009233       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:37.009320       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:47.018420       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:47.018526       1 main.go:227] handling current node
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:47.018541       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:47.018549       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:47.018669       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:47.018680       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:57.025132       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:57.025207       1 main.go:227] handling current node
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:57.025220       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:57.025228       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:35:57.026009       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:35:57.026145       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:07.042281       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:07.042353       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:07.042367       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:07.042375       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:07.042493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:07.042500       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:17.055539       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:17.055567       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:17.055581       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:17.055588       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:17.056312       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:17.056341       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:27.067921       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:27.067961       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:27.069052       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:27.069179       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:27.069306       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:27.069332       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:37.082322       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:37.082413       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:37.082429       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:37.082437       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:37.082972       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:37.083000       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:47.099685       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:47.099830       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:47.099862       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:47.099982       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.107274       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.107368       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.107382       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.107390       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.107827       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.107942       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.108076       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.84.215 Flags: [] Table: 0} 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:37:07.120709       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:37:07.121059       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:37:07.121098       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:37:07.121109       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:07.121440       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:07.121455       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:17.137704       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:17.137784       1 main.go:227] handling current node
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:17.137796       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:17.137803       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:17.138265       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:17.138298       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:27.144505       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:27.144594       1 main.go:227] handling current node
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:27.144607       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:27.144615       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:27.145062       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:27.145092       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:37.154684       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:37.154836       1 main.go:227] handling current node
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:37.154851       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:37.154860       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:37.155452       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:37.155614       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:47.168249       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:47.168338       1 main.go:227] handling current node
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:47.168352       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:47.168360       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:47.168976       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:47.169064       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:57.176039       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:57.176130       1 main.go:227] handling current node
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:57.176145       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:57.176153       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:57.176528       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:57.176659       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:07.189890       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:07.189993       1 main.go:227] handling current node
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:07.190008       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:07.190016       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:07.190217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:07.190245       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:17.196541       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:17.196633       1 main.go:227] handling current node
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:17.196647       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:17.196655       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:17.196888       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:17.197012       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:27.217365       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:27.217460       1 main.go:227] handling current node
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:27.217475       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:27.217483       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:27.217621       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:27.217634       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:37.229941       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:37.230048       1 main.go:227] handling current node
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:37.230062       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:37.230070       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:37.230268       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:37.230338       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
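	The kindnet entries above repeat on a roughly ten-second cycle: the daemon walks every node, marks the one it runs on as "current", and makes sure a route to each peer's pod CIDR exists. At 19:36:57 multinode-442000-m03 reappears with a new IP (172.17.84.215) and a new CIDR (10.244.3.0/24), so a fresh route is installed ("Adding route ... Gw: 172.17.84.215"). A minimal sketch of that reconciliation pattern, assuming the vishvananda/netlink package and not kindnet's actual source:

	```go
	// Sketch of a kindnet-style route reconciler (assumed shape, not kindnet's code).
	package main

	import (
		"log"
		"net"
		"time"

		"github.com/vishvananda/netlink"
	)

	// peer describes another node: its reachable IP and the pod CIDR it serves.
	type peer struct {
		nodeIP  net.IP
		podCIDR *net.IPNet
	}

	// reconcile ensures one route per peer: traffic for the peer's pod CIDR
	// goes via the peer's node IP. RouteReplace is idempotent, which matches
	// the "has CIDR" lines repeating every cycle without re-adding routes.
	func reconcile(peers []peer) {
		for _, p := range peers {
			route := &netlink.Route{Dst: p.podCIDR, Gw: p.nodeIP}
			if err := netlink.RouteReplace(route); err != nil {
				log.Printf("route for %s via %s: %v", p.podCIDR, p.nodeIP, err)
			}
		}
	}

	func main() {
		_, cidr, _ := net.ParseCIDR("10.244.3.0/24") // m03's CIDR after it rejoined
		peers := []peer{{nodeIP: net.ParseIP("172.17.84.215"), podCIDR: cidr}}
		for range time.Tick(10 * time.Second) { // the ~10s cadence seen above
			reconcile(peers)
		}
	}
	```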
	I0314 19:42:17.617295    8428 logs.go:123] Gathering logs for dmesg ...
	I0314 19:42:17.617295    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:42:17.637870    8428 command_runner.go:130] > [Mar14 19:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.111500] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.025646] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.051209] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.017569] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0314 19:42:17.637870    8428 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +5.774438] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.663188] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +1.473946] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +5.849126] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0314 19:42:17.637870    8428 command_runner.go:130] > [Mar14 19:40] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.179743] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [ +24.853688] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.096946] kauditd_printk_skb: 73 callbacks suppressed
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.497369] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.185545] systemd-fstab-generator[1021]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.215423] systemd-fstab-generator[1035]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +2.887443] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.193519] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.182072] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.258988] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.819687] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.099817] kauditd_printk_skb: 205 callbacks suppressed
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +2.940519] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [Mar14 19:41] kauditd_printk_skb: 84 callbacks suppressed
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +4.042735] systemd-fstab-generator[3087]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +7.733278] kauditd_printk_skb: 70 callbacks suppressed
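	The dmesg step above shells out to `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`: warning-and-above kernel messages, human-readable timestamps, no color, capped at 400 lines. A minimal sketch of that gathering step using os/exec, as a stand-in for minikube's ssh_runner (which runs the same pipeline inside the VM over SSH; the wrapper below is an assumed simplification):

	```go
	// Minimal sketch of the "gather dmesg" post-mortem step: run the shell
	// pipeline locally and capture its combined output.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func gatherDmesg() (string, error) {
		// Same flags as the logged command: -P no pager, -H human-readable,
		// -L=never no color, --level filters to warnings and worse.
		cmd := exec.Command("/bin/bash", "-c",
			`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
		out, err := cmd.CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := gatherDmesg()
		if err != nil {
			fmt.Println("dmesg gathering failed:", err)
		}
		fmt.Print(out)
	}
	```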
	I0314 19:42:17.640374    8428 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:42:17.640374    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:42:17.851291    8428 command_runner.go:130] > Name:               multinode-442000
	I0314 19:42:17.851291    8428 command_runner.go:130] > Roles:              control-plane
	I0314 19:42:17.851291    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:17.851291    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:17.851291    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:17.851291    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000
	I0314 19:42:17.851291    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:17.851413    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:17.851413    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_19_05_0700
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0314 19:42:17.851449    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:17.851449    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:19:00 +0000
	I0314 19:42:17.851449    8428 command_runner.go:130] > Taints:             <none>
	I0314 19:42:17.851449    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:17.851449    8428 command_runner.go:130] > Lease:
	I0314 19:42:17.851449    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000
	I0314 19:42:17.851449    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:17.851449    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:42:17 +0000
	I0314 19:42:17.851449    8428 command_runner.go:130] > Conditions:
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0314 19:42:17.851449    8428 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0314 19:42:17.851449    8428 command_runner.go:130] >   MemoryPressure   False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0314 19:42:17.851449    8428 command_runner.go:130] >   DiskPressure     False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0314 19:42:17.851449    8428 command_runner.go:130] >   PIDPressure      False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Ready            True    Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:41:41 +0000   KubeletReady                 kubelet is posting ready status
	I0314 19:42:17.851449    8428 command_runner.go:130] > Addresses:
	I0314 19:42:17.851449    8428 command_runner.go:130] >   InternalIP:  172.17.93.236
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Hostname:    multinode-442000
	I0314 19:42:17.851449    8428 command_runner.go:130] > Capacity:
	I0314 19:42:17.851449    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:17.851449    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:17.851449    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:17.851449    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:17.851449    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:17.851449    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:17.851449    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:17.851449    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:17.851449    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:17.851449    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:17.851449    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:17.851449    8428 command_runner.go:130] > System Info:
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Machine ID:                 37c811f81f1d4d709fd4a6eb79d70749
	I0314 19:42:17.851449    8428 command_runner.go:130] >   System UUID:                8469b663-ea90-da4f-856d-11034a8f65d8
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Boot ID:                    91589624-f8f3-469e-b556-aa6dd64e54de
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:17.851449    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:17.851449    8428 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0314 19:42:17.851449    8428 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0314 19:42:17.851449    8428 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:17.851971    8428 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0314 19:42:17.851971    8428 command_runner.go:130] >   default                     busybox-5b5d89c9d6-7446n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0314 19:42:17.851971    8428 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-d22jc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0314 19:42:17.851971    8428 command_runner.go:130] >   kube-system                 etcd-multinode-442000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0314 19:42:17.851971    8428 command_runner.go:130] >   kube-system                 kindnet-7b9lf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0314 19:42:17.851971    8428 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-442000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0314 19:42:17.852054    8428 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-442000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:17.852177    8428 command_runner.go:130] >   kube-system                 kube-proxy-cg28g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:17.852177    8428 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-442000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:17.852177    8428 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0314 19:42:17.852177    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:17.852177    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:17.852177    8428 command_runner.go:130] >   Resource           Requests     Limits
	I0314 19:42:17.852177    8428 command_runner.go:130] >   --------           --------     ------
	I0314 19:42:17.852177    8428 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0314 19:42:17.852271    8428 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0314 19:42:17.852271    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0314 19:42:17.852271    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0314 19:42:17.852271    8428 command_runner.go:130] > Events:
	I0314 19:42:17.852271    8428 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0314 19:42:17.852271    8428 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0314 19:42:17.852271    8428 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0314 19:42:17.852271    8428 command_runner.go:130] >   Normal  Starting                 69s                kube-proxy       
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  Starting                 23m                kubelet          Starting kubelet.
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-442000 status is now: NodeReady
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  Starting                 78s                kubelet          Starting kubelet.
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	I0314 19:42:17.852412    8428 command_runner.go:130] > Name:               multinode-442000-m02
	I0314 19:42:17.852412    8428 command_runner.go:130] > Roles:              <none>
	I0314 19:42:17.852412    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000-m02
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_22_02_0700
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:17.852412    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:17.852412    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:22:02 +0000
	I0314 19:42:17.852412    8428 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0314 19:42:17.852412    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:17.852412    8428 command_runner.go:130] > Lease:
	I0314 19:42:17.852412    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000-m02
	I0314 19:42:17.852412    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:17.852412    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:38:03 +0000
	I0314 19:42:17.852412    8428 command_runner.go:130] > Conditions:
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0314 19:42:17.852412    8428 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0314 19:42:17.852412    8428 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.852939    8428 command_runner.go:130] >   DiskPressure     Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.852939    8428 command_runner.go:130] >   PIDPressure      Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.852939    8428 command_runner.go:130] >   Ready            Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.852939    8428 command_runner.go:130] > Addresses:
	I0314 19:42:17.852939    8428 command_runner.go:130] >   InternalIP:  172.17.80.135
	I0314 19:42:17.852939    8428 command_runner.go:130] >   Hostname:    multinode-442000-m02
	I0314 19:42:17.853143    8428 command_runner.go:130] > Capacity:
	I0314 19:42:17.853143    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:17.853143    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:17.853143    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:17.853143    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:17.853143    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:17.853143    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:17.853143    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:17.853314    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:17.853333    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:17.853333    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:17.853333    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:17.853333    8428 command_runner.go:130] > System Info:
	I0314 19:42:17.853333    8428 command_runner.go:130] >   Machine ID:                 35b6f7da4d3943d99d8a5913cae1c8fb
	I0314 19:42:17.853333    8428 command_runner.go:130] >   System UUID:                0b9b8376-0767-f940-9973-d373e3dc050d
	I0314 19:42:17.853333    8428 command_runner.go:130] >   Boot ID:                    45d479cc-26e8-46a6-9431-50637071f586
	I0314 19:42:17.853392    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:17.853392    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:17.853409    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:17.853409    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:17.853409    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:17.853409    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:17.853409    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:17.853409    8428 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0314 19:42:17.853409    8428 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0314 19:42:17.853409    8428 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0314 19:42:17.853494    8428 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:17.853494    8428 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0314 19:42:17.853494    8428 command_runner.go:130] >   default                     busybox-5b5d89c9d6-8drpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0314 19:42:17.853494    8428 command_runner.go:130] >   kube-system                 kindnet-c7m4p               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0314 19:42:17.853494    8428 command_runner.go:130] >   kube-system                 kube-proxy-72dzs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0314 19:42:17.853494    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:17.853494    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Resource           Requests   Limits
	I0314 19:42:17.853569    8428 command_runner.go:130] >   --------           --------   ------
	I0314 19:42:17.853569    8428 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0314 19:42:17.853569    8428 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0314 19:42:17.853569    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0314 19:42:17.853569    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0314 19:42:17.853569    8428 command_runner.go:130] > Events:
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0314 19:42:17.853569    8428 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientMemory
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasNoDiskPressure
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientPID
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  NodeReady                19m                kubelet          Node multinode-442000-m02 status is now: NodeReady
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  NodeNotReady             19s                node-controller  Node multinode-442000-m02 status is now: NodeNotReady
	I0314 19:42:17.853569    8428 command_runner.go:130] > Name:               multinode-442000-m03
	I0314 19:42:17.853569    8428 command_runner.go:130] > Roles:              <none>
	I0314 19:42:17.853569    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000-m03
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_36_47_0700
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:17.853569    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:17.853569    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:36:47 +0000
	I0314 19:42:17.853569    8428 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0314 19:42:17.853569    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:17.853569    8428 command_runner.go:130] > Lease:
	I0314 19:42:17.853569    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000-m03
	I0314 19:42:17.853569    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:17.853569    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:37:37 +0000
	I0314 19:42:17.853569    8428 command_runner.go:130] > Conditions:
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0314 19:42:17.853569    8428 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0314 19:42:17.853569    8428 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.854104    8428 command_runner.go:130] >   DiskPressure     Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.854104    8428 command_runner.go:130] >   PIDPressure      Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.854104    8428 command_runner.go:130] >   Ready            Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.854104    8428 command_runner.go:130] > Addresses:
	I0314 19:42:17.854104    8428 command_runner.go:130] >   InternalIP:  172.17.84.215
	I0314 19:42:17.854104    8428 command_runner.go:130] >   Hostname:    multinode-442000-m03
	I0314 19:42:17.854104    8428 command_runner.go:130] > Capacity:
	I0314 19:42:17.854104    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:17.854176    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:17.854176    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:17.854176    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:17.854176    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:17.854176    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:17.854176    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:17.854220    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:17.854220    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:17.854220    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:17.854220    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:17.854220    8428 command_runner.go:130] > System Info:
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Machine ID:                 dc7772516bfe448db22a5c28796f53ab
	I0314 19:42:17.854220    8428 command_runner.go:130] >   System UUID:                71573585-d564-f043-9154-3d5854ce61b8
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Boot ID:                    fed746b2-110b-43ee-9065-09983ba74a37
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:17.854220    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:17.854331    8428 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0314 19:42:17.854331    8428 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0314 19:42:17.854331    8428 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0314 19:42:17.854331    8428 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:17.854331    8428 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0314 19:42:17.854331    8428 command_runner.go:130] >   kube-system                 kindnet-r7zdb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	I0314 19:42:17.854331    8428 command_runner.go:130] >   kube-system                 kube-proxy-w2qls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	I0314 19:42:17.854451    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:17.854451    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:17.854451    8428 command_runner.go:130] >   Resource           Requests   Limits
	I0314 19:42:17.854451    8428 command_runner.go:130] >   --------           --------   ------
	I0314 19:42:17.854541    8428 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0314 19:42:17.854541    8428 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0314 19:42:17.854541    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0314 19:42:17.854668    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0314 19:42:17.854668    8428 command_runner.go:130] > Events:
	I0314 19:42:17.854668    8428 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0314 19:42:17.854668    8428 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0314 19:42:17.854728    8428 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0314 19:42:17.854728    8428 command_runner.go:130] >   Normal  Starting                 5m29s                  kube-proxy       
	I0314 19:42:17.854766    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	I0314 19:42:17.854766    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	I0314 19:42:17.854766    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-442000-m03 status is now: NodeReady
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m31s (x5 over 5m33s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m31s (x5 over 5m33s)  kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m31s (x5 over 5m33s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  RegisteredNode           5m27s                  node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  NodeReady                5m24s                  kubelet          Node multinode-442000-m03 status is now: NodeReady
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  NodeNotReady             3m57s                  node-controller  Node multinode-442000-m03 status is now: NodeNotReady
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
	I0314 19:42:17.863994    8428 logs.go:123] Gathering logs for etcd [a81a9c43c355] ...
	I0314 19:42:17.863994    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81a9c43c355"
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:01.944953Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.945607Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.93.236:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.93.236:2380","--initial-cluster=multinode-442000=https://172.17.93.236:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.93.236:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.93.236:2380","--name=multinode-442000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.945676Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:01.945701Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94571Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.93.236:2380"]}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94582Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94751Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"]}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.948798Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-442000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0314 19:42:17.898497    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.989049Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"39.493838ms"}
	I0314 19:42:17.898541    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.0258Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0314 19:42:17.898598    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.055698Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","commit-index":1967}
	I0314 19:42:17.898639    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.067927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=()"}
	I0314 19:42:17.898639    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.067975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became follower at term 2"}
	I0314 19:42:17.898692    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.068051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fa26a6ed08186c39 [peers: [], term: 2, commit: 1967, applied: 0, lastindex: 1967, lastterm: 2]"}
	I0314 19:42:17.898692    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:02.100633Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0314 19:42:17.898733    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.113992Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1090}
	I0314 19:42:17.898733    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.125551Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1704}
	I0314 19:42:17.898786    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.137052Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0314 19:42:17.898786    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.152836Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"fa26a6ed08186c39","timeout":"7s"}
	I0314 19:42:17.898820    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.153448Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"fa26a6ed08186c39"}
	I0314 19:42:17.898820    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.153504Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"fa26a6ed08186c39","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0314 19:42:17.898868    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154089Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0314 19:42:17.898868    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154894Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154977Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154992Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=(18025278095570267193)"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158756Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","added-peer-id":"fa26a6ed08186c39","added-peer-peer-urls":["https://172.17.86.124:2380"]}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","cluster-version":"3.5"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158969Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.159838Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.160148Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"fa26a6ed08186c39","initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.160272Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.161335Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.93.236:2380"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.161389Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.93.236:2380"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 is starting a new election at term 2"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became pre-candidate at term 2"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgPreVoteResp from fa26a6ed08186c39 at term 2"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became candidate at term 3"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgVoteResp from fa26a6ed08186c39 at term 3"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became leader at term 3"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fa26a6ed08186c39 elected leader fa26a6ed08186c39 at term 3"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.292472Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fa26a6ed08186c39","local-member-attributes":"{Name:multinode-442000 ClientURLs:[https://172.17.93.236:2379]}","request-path":"/0/members/fa26a6ed08186c39/attributes","cluster-id":"76b99849a2fc5549","publish-timeout":"7s"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.292867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.296522Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.298446Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.311867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.93.236:2379"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.311957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.31205Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0314 19:42:17.904943    8428 logs.go:123] Gathering logs for kube-proxy [497007582e44] ...
	I0314 19:42:17.904943    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497007582e44"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.342277       1 server_others.go:69] "Using iptables proxy"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.381589       1 node.go:141] Successfully retrieved node IP: 172.17.93.236
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.703360       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.703384       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.724122       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.726554       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.729424       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.729460       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.732062       1 config.go:188] "Starting service config controller"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.732501       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.732571       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.732581       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.733523       1 config.go:315] "Starting node config controller"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.733550       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.832968       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.833049       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.835501       1 shared_informer.go:318] Caches are synced for node config
	I0314 19:42:17.933860    8428 logs.go:123] Gathering logs for kube-controller-manager [12baf105f0bb] ...
	I0314 19:42:17.933914    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12baf105f0bb"
	I0314 19:42:17.963142    8428 command_runner.go:130] ! I0314 19:41:03.101287       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:17.964008    8428 command_runner.go:130] ! I0314 19:41:03.872151       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:03.874301       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:03.879645       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:03.880765       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:03.883873       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:03.883977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.787609       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.796442       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.796953       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.798900       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.848846       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.849015       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.849025       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.855296       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.858491       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.858512       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.864964       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.865080       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.865088       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.870629       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.871089       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.871332       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.889997       1 shared_informer.go:318] Caches are synced for tokens
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.899597       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.900355       1 horizontal.go:200] "Starting HPA controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.901325       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0314 19:42:17.982925    8428 command_runner.go:130] ! I0314 19:41:07.921217       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0314 19:42:17.982925    8428 command_runner.go:130] ! I0314 19:41:07.922072       1 disruption.go:433] "Sending events to api server."
	I0314 19:42:17.982925    8428 command_runner.go:130] ! I0314 19:41:07.922293       1 disruption.go:444] "Starting disruption controller"
	I0314 19:42:17.982925    8428 command_runner.go:130] ! I0314 19:41:07.922481       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:42:17.983008    8428 command_runner.go:130] ! I0314 19:41:07.927437       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0314 19:42:17.983008    8428 command_runner.go:130] ! I0314 19:41:07.929290       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0314 19:42:17.983008    8428 command_runner.go:130] ! I0314 19:41:07.929325       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0314 19:42:17.983008    8428 command_runner.go:130] ! I0314 19:41:07.936410       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0314 19:42:17.983008    8428 command_runner.go:130] ! I0314 19:41:07.936565       1 controller.go:169] "Starting ephemeral volume controller"
	I0314 19:42:17.983085    8428 command_runner.go:130] ! I0314 19:41:07.936765       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0314 19:42:17.983085    8428 command_runner.go:130] ! I0314 19:41:07.954720       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:42:17.983085    8428 command_runner.go:130] ! I0314 19:41:07.954939       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:42:17.983085    8428 command_runner.go:130] ! I0314 19:41:07.955142       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:42:17.983165    8428 command_runner.go:130] ! I0314 19:41:07.970387       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0314 19:42:17.983165    8428 command_runner.go:130] ! I0314 19:41:07.970474       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0314 19:42:17.983165    8428 command_runner.go:130] ! I0314 19:41:07.970624       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:17.983165    8428 command_runner.go:130] ! I0314 19:41:07.971307       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0314 19:42:17.983240    8428 command_runner.go:130] ! I0314 19:41:07.975049       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0314 19:42:17.983240    8428 command_runner.go:130] ! I0314 19:41:07.973288       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:17.983240    8428 command_runner.go:130] ! I0314 19:41:07.974848       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0314 19:42:17.983310    8428 command_runner.go:130] ! I0314 19:41:07.974977       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0314 19:42:17.983310    8428 command_runner.go:130] ! I0314 19:41:07.977476       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:42:17.983310    8428 command_runner.go:130] ! I0314 19:41:07.974992       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:17.983310    8428 command_runner.go:130] ! I0314 19:41:07.975020       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0314 19:42:17.983310    8428 command_runner.go:130] ! I0314 19:41:07.977827       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:17.983390    8428 command_runner.go:130] ! I0314 19:41:07.975030       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:17.983390    8428 command_runner.go:130] ! I0314 19:41:07.990774       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0314 19:42:17.983390    8428 command_runner.go:130] ! I0314 19:41:07.995647       1 ttl_controller.go:124] "Starting TTL controller"
	I0314 19:42:17.983390    8428 command_runner.go:130] ! I0314 19:41:07.995667       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0314 19:42:17.983390    8428 command_runner.go:130] ! I0314 19:41:08.019000       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0314 19:42:17.983464    8428 command_runner.go:130] ! I0314 19:41:08.019415       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:42:17.983464    8428 command_runner.go:130] ! I0314 19:41:08.019568       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:42:17.983464    8428 command_runner.go:130] ! I0314 19:41:08.019700       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:42:17.983464    8428 command_runner.go:130] ! E0314 19:41:08.029770       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0314 19:42:17.983464    8428 command_runner.go:130] ! I0314 19:41:08.029950       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0314 19:42:17.983540    8428 command_runner.go:130] ! I0314 19:41:08.030066       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0314 19:42:17.983540    8428 command_runner.go:130] ! I0314 19:41:08.030148       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0314 19:42:17.983540    8428 command_runner.go:130] ! I0314 19:41:08.056856       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:42:17.983540    8428 command_runner.go:130] ! I0314 19:41:08.058933       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:42:17.983540    8428 command_runner.go:130] ! I0314 19:41:08.059323       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:42:17.983540    8428 command_runner.go:130] ! I0314 19:41:08.062839       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0314 19:42:17.983613    8428 command_runner.go:130] ! I0314 19:41:08.063208       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0314 19:42:17.983613    8428 command_runner.go:130] ! I0314 19:41:08.063512       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0314 19:42:17.983613    8428 command_runner.go:130] ! I0314 19:41:08.070376       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0314 19:42:17.983613    8428 command_runner.go:130] ! I0314 19:41:08.070635       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0314 19:42:17.983687    8428 command_runner.go:130] ! I0314 19:41:08.070748       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0314 19:42:17.983687    8428 command_runner.go:130] ! I0314 19:41:08.071006       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.071615       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.079849       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.080117       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.081765       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.084328       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.084731       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.085301       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 19:42:17.983836    8428 command_runner.go:130] ! I0314 19:41:08.092529       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0314 19:42:17.983836    8428 command_runner.go:130] ! I0314 19:41:08.092761       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0314 19:42:17.983836    8428 command_runner.go:130] ! I0314 19:41:08.092771       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:17.983836    8428 command_runner.go:130] ! I0314 19:41:08.097268       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0314 19:42:17.983910    8428 command_runner.go:130] ! I0314 19:41:08.097521       1 expand_controller.go:328] "Starting expand controller"
	I0314 19:42:17.983910    8428 command_runner.go:130] ! I0314 19:41:08.097531       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0314 19:42:17.983910    8428 command_runner.go:130] ! I0314 19:41:08.097559       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0314 19:42:17.983910    8428 command_runner.go:130] ! I0314 19:41:08.117374       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0314 19:42:17.983910    8428 command_runner.go:130] ! I0314 19:41:08.117512       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0314 19:42:17.983981    8428 command_runner.go:130] ! I0314 19:41:08.117524       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0314 19:42:17.983981    8428 command_runner.go:130] ! I0314 19:41:08.126388       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:42:17.983981    8428 command_runner.go:130] ! I0314 19:41:08.127645       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:42:17.983981    8428 command_runner.go:130] ! I0314 19:41:08.127702       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:42:17.984056    8428 command_runner.go:130] ! I0314 19:41:08.131336       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:42:17.984056    8428 command_runner.go:130] ! I0314 19:41:08.131505       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:42:17.984056    8428 command_runner.go:130] ! E0314 19:41:08.142589       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 19:42:17.984056    8428 command_runner.go:130] ! I0314 19:41:08.142621       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 19:42:17.984056    8428 command_runner.go:130] ! I0314 19:41:08.150057       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0314 19:42:17.984131    8428 command_runner.go:130] ! I0314 19:41:08.152574       1 gc_controller.go:101] "Starting GC controller"
	I0314 19:42:17.984131    8428 command_runner.go:130] ! I0314 19:41:08.152724       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 19:42:17.984131    8428 command_runner.go:130] ! I0314 19:41:08.302881       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0314 19:42:17.984131    8428 command_runner.go:130] ! I0314 19:41:08.303337       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0314 19:42:17.984131    8428 command_runner.go:130] ! W0314 19:41:08.303671       1 shared_informer.go:593] resyncPeriod 21h24m41.293167603s is smaller than resyncCheckPeriod 22h48m56.659186017s and the informer has already started. Changing it to 22h48m56.659186017s
	I0314 19:42:17.984206    8428 command_runner.go:130] ! I0314 19:41:08.303970       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0314 19:42:17.984206    8428 command_runner.go:130] ! I0314 19:41:08.304292       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0314 19:42:17.984206    8428 command_runner.go:130] ! I0314 19:41:08.304532       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0314 19:42:17.984279    8428 command_runner.go:130] ! I0314 19:41:08.304816       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0314 19:42:17.984279    8428 command_runner.go:130] ! I0314 19:41:08.305073       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0314 19:42:17.984279    8428 command_runner.go:130] ! I0314 19:41:08.305373       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0314 19:42:17.984279    8428 command_runner.go:130] ! I0314 19:41:08.305634       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0314 19:42:17.984354    8428 command_runner.go:130] ! I0314 19:41:08.305976       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0314 19:42:17.984354    8428 command_runner.go:130] ! I0314 19:41:08.306286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0314 19:42:17.984354    8428 command_runner.go:130] ! I0314 19:41:08.306541       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0314 19:42:17.984354    8428 command_runner.go:130] ! I0314 19:41:08.306699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0314 19:42:17.984429    8428 command_runner.go:130] ! I0314 19:41:08.306843       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0314 19:42:17.984429    8428 command_runner.go:130] ! I0314 19:41:08.307119       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0314 19:42:17.984429    8428 command_runner.go:130] ! I0314 19:41:08.307379       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:42:17.984429    8428 command_runner.go:130] ! I0314 19:41:08.307553       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0314 19:42:17.984429    8428 command_runner.go:130] ! I0314 19:41:08.307700       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0314 19:42:17.984504    8428 command_runner.go:130] ! I0314 19:41:08.308022       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0314 19:42:17.984504    8428 command_runner.go:130] ! I0314 19:41:08.308207       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0314 19:42:17.984504    8428 command_runner.go:130] ! I0314 19:41:08.308473       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:42:17.984504    8428 command_runner.go:130] ! I0314 19:41:08.308664       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:42:17.984504    8428 command_runner.go:130] ! I0314 19:41:08.309850       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:17.984580    8428 command_runner.go:130] ! I0314 19:41:08.310060       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:42:17.984580    8428 command_runner.go:130] ! I0314 19:41:08.344084       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0314 19:42:17.984580    8428 command_runner.go:130] ! I0314 19:41:08.344536       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0314 19:42:17.984580    8428 command_runner.go:130] ! I0314 19:41:08.344832       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0314 19:42:17.984580    8428 command_runner.go:130] ! I0314 19:41:08.397742       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:42:17.984654    8428 command_runner.go:130] ! I0314 19:41:08.400742       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:42:17.984654    8428 command_runner.go:130] ! I0314 19:41:08.401126       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:42:17.984654    8428 command_runner.go:130] ! I0314 19:41:08.448054       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:42:17.984654    8428 command_runner.go:130] ! I0314 19:41:08.448538       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:42:17.984654    8428 command_runner.go:130] ! I0314 19:41:08.495738       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0314 19:42:17.984728    8428 command_runner.go:130] ! I0314 19:41:08.496045       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0314 19:42:17.984728    8428 command_runner.go:130] ! I0314 19:41:08.496112       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0314 19:42:17.984728    8428 command_runner.go:130] ! I0314 19:41:08.547967       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:42:17.984728    8428 command_runner.go:130] ! I0314 19:41:08.548352       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:42:17.984728    8428 command_runner.go:130] ! I0314 19:41:08.548556       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:42:17.984728    8428 command_runner.go:130] ! I0314 19:41:08.593742       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0314 19:42:17.984817    8428 command_runner.go:130] ! I0314 19:41:08.593860       1 job_controller.go:226] "Starting job controller"
	I0314 19:42:17.984817    8428 command_runner.go:130] ! I0314 19:41:08.594297       1 shared_informer.go:311] Waiting for caches to sync for job
	I0314 19:42:17.984817    8428 command_runner.go:130] ! I0314 19:41:08.650392       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:42:17.984893    8428 command_runner.go:130] ! I0314 19:41:08.650668       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:42:17.984893    8428 command_runner.go:130] ! I0314 19:41:08.650851       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:42:17.984893    8428 command_runner.go:130] ! I0314 19:41:08.704591       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0314 19:42:17.984893    8428 command_runner.go:130] ! I0314 19:41:08.704627       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0314 19:42:17.984893    8428 command_runner.go:130] ! I0314 19:41:08.704645       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0314 19:42:17.984973    8428 command_runner.go:130] ! I0314 19:41:18.768485       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0314 19:42:17.984973    8428 command_runner.go:130] ! I0314 19:41:18.768824       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0314 19:42:17.984973    8428 command_runner.go:130] ! I0314 19:41:18.769281       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0314 19:42:17.984973    8428 command_runner.go:130] ! I0314 19:41:18.769315       1 shared_informer.go:311] Waiting for caches to sync for node
	I0314 19:42:17.984973    8428 command_runner.go:130] ! I0314 19:41:18.779639       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:17.985046    8428 command_runner.go:130] ! I0314 19:41:18.796167       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 19:42:17.985046    8428 command_runner.go:130] ! I0314 19:41:18.796514       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:17.985046    8428 command_runner.go:130] ! I0314 19:41:18.796299       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000\" does not exist"
	I0314 19:42:17.985046    8428 command_runner.go:130] ! I0314 19:41:18.799471       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:42:17.985120    8428 command_runner.go:130] ! I0314 19:41:18.799722       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 19:42:17.985120    8428 command_runner.go:130] ! I0314 19:41:18.799937       1 shared_informer.go:318] Caches are synced for TTL
	I0314 19:42:17.985120    8428 command_runner.go:130] ! I0314 19:41:18.800165       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:17.985120    8428 command_runner.go:130] ! I0314 19:41:18.802329       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:17.985203    8428 command_runner.go:130] ! I0314 19:41:18.802379       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:17.985203    8428 command_runner.go:130] ! I0314 19:41:18.806338       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 19:42:17.985203    8428 command_runner.go:130] ! I0314 19:41:18.836188       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 19:42:17.985203    8428 command_runner.go:130] ! I0314 19:41:18.842003       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 19:42:17.985275    8428 command_runner.go:130] ! I0314 19:41:18.842516       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 19:42:17.985275    8428 command_runner.go:130] ! I0314 19:41:18.845380       1 shared_informer.go:318] Caches are synced for service account
	I0314 19:42:17.985275    8428 command_runner.go:130] ! I0314 19:41:18.848744       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 19:42:17.985275    8428 command_runner.go:130] ! I0314 19:41:18.849154       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 19:42:17.985275    8428 command_runner.go:130] ! I0314 19:41:18.849988       1 shared_informer.go:318] Caches are synced for namespace
	I0314 19:42:17.985275    8428 command_runner.go:130] ! I0314 19:41:18.850447       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:17.985353    8428 command_runner.go:130] ! I0314 19:41:18.851139       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 19:42:17.985353    8428 command_runner.go:130] ! I0314 19:41:18.852942       1 shared_informer.go:318] Caches are synced for GC
	I0314 19:42:17.985353    8428 command_runner.go:130] ! I0314 19:41:18.860631       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 19:42:17.985353    8428 command_runner.go:130] ! I0314 19:41:18.862001       1 shared_informer.go:318] Caches are synced for cronjob
	I0314 19:42:17.985353    8428 command_runner.go:130] ! I0314 19:41:18.862045       1 shared_informer.go:318] Caches are synced for PVC protection
	I0314 19:42:17.985429    8428 command_runner.go:130] ! I0314 19:41:18.864453       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0314 19:42:17.985429    8428 command_runner.go:130] ! I0314 19:41:18.865205       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 19:42:17.985429    8428 command_runner.go:130] ! I0314 19:41:18.870312       1 shared_informer.go:318] Caches are synced for node
	I0314 19:42:17.985429    8428 command_runner.go:130] ! I0314 19:41:18.871490       1 range_allocator.go:174] "Sending events to api server"
	I0314 19:42:17.985429    8428 command_runner.go:130] ! I0314 19:41:18.871652       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 19:42:17.985429    8428 command_runner.go:130] ! I0314 19:41:18.871843       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 19:42:17.985508    8428 command_runner.go:130] ! I0314 19:41:18.871901       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 19:42:17.985508    8428 command_runner.go:130] ! I0314 19:41:18.871655       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0314 19:42:17.985508    8428 command_runner.go:130] ! I0314 19:41:18.871600       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 19:42:17.985508    8428 command_runner.go:130] ! I0314 19:41:18.877449       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0314 19:42:17.985508    8428 command_runner.go:130] ! I0314 19:41:18.878919       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.880521       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.886337       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.895206       1 shared_informer.go:318] Caches are synced for job
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.898522       1 shared_informer.go:318] Caches are synced for expand
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.902360       1 shared_informer.go:318] Caches are synced for deployment
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.905493       1 shared_informer.go:318] Caches are synced for HPA
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.906213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.805878ms"
	I0314 19:42:17.985656    8428 command_runner.go:130] ! I0314 19:41:18.908178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.802µs"
	I0314 19:42:17.985656    8428 command_runner.go:130] ! I0314 19:41:18.908549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.720551ms"
	I0314 19:42:17.985656    8428 command_runner.go:130] ! I0314 19:41:18.911784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.705µs"
	I0314 19:42:17.985656    8428 command_runner.go:130] ! I0314 19:41:18.919410       1 shared_informer.go:318] Caches are synced for crt configmap
	I0314 19:42:17.985656    8428 command_runner.go:130] ! I0314 19:41:18.923587       1 shared_informer.go:318] Caches are synced for disruption
	I0314 19:42:17.985732    8428 command_runner.go:130] ! I0314 19:41:18.974303       1 shared_informer.go:318] Caches are synced for taint
	I0314 19:42:17.985732    8428 command_runner.go:130] ! I0314 19:41:18.974653       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 19:42:17.985732    8428 command_runner.go:130] ! I0314 19:41:18.975178       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 19:42:17.985732    8428 command_runner.go:130] ! I0314 19:41:18.975416       1 taint_manager.go:210] "Sending events to api server"
	I0314 19:42:17.985806    8428 command_runner.go:130] ! I0314 19:41:18.977051       1 event.go:307] "Event occurred" object="multinode-442000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000 event: Registered Node multinode-442000 in Controller"
	I0314 19:42:17.985806    8428 command_runner.go:130] ! I0314 19:41:18.977995       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:42:17.985806    8428 command_runner.go:130] ! I0314 19:41:18.978165       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:17.985806    8428 command_runner.go:130] ! I0314 19:41:18.980168       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:17.985883    8428 command_runner.go:130] ! I0314 19:41:18.982162       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 19:42:17.985883    8428 command_runner.go:130] ! I0314 19:41:19.001384       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000"
	I0314 19:42:17.985883    8428 command_runner.go:130] ! I0314 19:41:19.002299       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:42:17.985883    8428 command_runner.go:130] ! I0314 19:41:19.002838       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:42:17.985956    8428 command_runner.go:130] ! I0314 19:41:19.003844       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0314 19:42:17.985956    8428 command_runner.go:130] ! I0314 19:41:19.010468       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:17.985956    8428 command_runner.go:130] ! I0314 19:41:19.393074       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:17.985956    8428 command_runner.go:130] ! I0314 19:41:19.393161       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 19:42:17.986031    8428 command_runner.go:130] ! I0314 19:41:19.450734       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:17.986031    8428 command_runner.go:130] ! I0314 19:41:41.542550       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:17.986031    8428 command_runner.go:130] ! I0314 19:41:44.029818       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0314 19:42:17.986031    8428 command_runner.go:130] ! I0314 19:41:44.029853       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-d22jc" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-d22jc"
	I0314 19:42:17.986111    8428 command_runner.go:130] ! I0314 19:41:44.029866       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-7446n" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-7446n"
	I0314 19:42:17.986169    8428 command_runner.go:130] ! I0314 19:41:59.058949       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m02 status is now: NodeNotReady"
	I0314 19:42:17.986205    8428 command_runner.go:130] ! I0314 19:41:59.074940       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8drpb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:17.986205    8428 command_runner.go:130] ! I0314 19:41:59.085508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.938337ms"
	I0314 19:42:17.986205    8428 command_runner.go:130] ! I0314 19:41:59.086845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.804µs"
	I0314 19:42:17.986205    8428 command_runner.go:130] ! I0314 19:41:59.099029       1 event.go:307] "Event occurred" object="kube-system/kindnet-c7m4p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:17.986205    8428 command_runner.go:130] ! I0314 19:41:59.122329       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-72dzs" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:17.986282    8428 command_runner.go:130] ! I0314 19:42:12.281109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.332951ms"
	I0314 19:42:17.986282    8428 command_runner.go:130] ! I0314 19:42:12.281325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="115.209µs"
	I0314 19:42:17.986313    8428 command_runner.go:130] ! I0314 19:42:12.305037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.006µs"
	I0314 19:42:17.986341    8428 command_runner.go:130] ! I0314 19:42:12.366507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.074928ms"
	I0314 19:42:17.986341    8428 command_runner.go:130] ! I0314 19:42:12.368560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.408µs"
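	Each relayed line in the block above carries two klog headers: minikube's own (pid 8428, command_runner.go:130) wrapping the container's inner header (pid 1), with command_runner marking relayed stderr as "! " and stdout as "> ". A minimal Go sketch of unwrapping the two headers to recover the inner message, assuming the standard klog prefix layout Lmmdd hh:mm:ss.uuuuuu pid file:line] msg; the regex and function names here are illustrative, not part of minikube:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogHeader matches the standard klog prefix: severity letter, mmdd date,
	// time with microseconds, PID, source file:line, and the closing "]".
	var klogHeader = regexp.MustCompile(`^\s*([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

	// innerMessage peels the outer minikube header off a relayed line and, when
	// the remainder is itself a klog line (command_runner prefixes relayed
	// container stderr with "! " and stdout with "> "), peels that header too.
	func innerMessage(line string) (string, bool) {
		m := klogHeader.FindStringSubmatch(line)
		if m == nil {
			return "", false
		}
		msg := m[6]
		if len(msg) > 2 && (msg[0] == '!' || msg[0] == '>') && msg[1] == ' ' {
			if inner := klogHeader.FindStringSubmatch(msg[2:]); inner != nil {
				return inner[6], true
			}
			return msg[2:], true
		}
		return msg, true
	}

	func main() {
		line := `I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.899597       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"`
		if msg, ok := innerMessage(line); ok {
			fmt.Println(msg) // "Started controller" controller="horizontal-pod-autoscaler-controller"
		}
	}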
	I0314 19:42:17.998710    8428 logs.go:123] Gathering logs for Docker ...
	I0314 19:42:17.998710    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 19:42:18.030533    8428 command_runner.go:130] > Mar 14 19:39:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:18.030533    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:18.030533    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:18.030639    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:18.030639    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:18.030676    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:18.030712    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.030712    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0314 19:42:18.030752    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.030752    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:18.030787    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:18.030787    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:18.030826    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:18.030826    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:18.030826    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:18.030869    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.030869    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0314 19:42:18.030911    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.030911    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:18.030947    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:18.030947    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:26 multinode-442000 systemd[1]: Starting Docker Application Container Engine...
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.010258466Z" level=info msg="Starting up"
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.011413188Z" level=info msg="containerd not running, starting managed containerd"
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.012927209Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=656
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.042687292Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069138554Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069242083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069344111Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069362416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070081016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070164740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070380400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070511536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070532642Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070544145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070983067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.071556427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074554061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074645687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031536    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074800830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.031536    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074883153Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0314 19:42:18.031576    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075687977Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0314 19:42:18.031619    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075800308Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0314 19:42:18.031657    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075818813Z" level=info msg="metadata content store policy set" policy=shared
	I0314 19:42:18.031657    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081334348Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0314 19:42:18.031691    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081440978Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0314 19:42:18.031691    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081463484Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0314 19:42:18.031730    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081526902Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0314 19:42:18.031765    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081545007Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0314 19:42:18.031765    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081621128Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0314 19:42:18.031804    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082036144Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0314 19:42:18.031804    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082193387Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0314 19:42:18.031846    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082276711Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0314 19:42:18.031846    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082349431Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0314 19:42:18.031887    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082368036Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.031928    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082385141Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.031928    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082401545Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.031969    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082417450Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.032010    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082433154Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.032010    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082457161Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.032052    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082515377Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.032052    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082533482Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.032087    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082554788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032126    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082572093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032126    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082586997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032166    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082601801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032205    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082616305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032239    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082631109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032271    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082643913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032271    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082659317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082673721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082690226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082704230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082717333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082730637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082747942Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082771048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082785952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082799956Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082936994Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082973004Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082986808Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082998612Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083067631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083095839Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083107842Z" level=info msg="NRI interface is disabled by configuration."
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083364013Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083531860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083575672Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083609482Z" level=info msg="containerd successfully booted in 0.043398s"
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.063674621Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.220876850Z" level=info msg="Loading containers: start."
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.643208421Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.726589336Z" level=info msg="Loading containers: done."
	I0314 19:42:18.032821    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.750141296Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0314 19:42:18.032862    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.750832983Z" level=info msg="Daemon has completed initialization"
	I0314 19:42:18.032862    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 systemd[1]: Started Docker Application Container Engine.
	I0314 19:42:18.032862    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.799522730Z" level=info msg="API listen on [::]:2376"
	I0314 19:42:18.032904    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.799691776Z" level=info msg="API listen on /var/run/docker.sock"
	I0314 19:42:18.032904    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 systemd[1]: Stopping Docker Application Container Engine...
	I0314 19:42:18.032944    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.824796168Z" level=info msg="Processing signal 'terminated'"
	I0314 19:42:18.032978    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.825961557Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0314 19:42:18.032978    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826585605Z" level=info msg="Daemon shutdown complete"
	I0314 19:42:18.033017    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826653911Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0314 19:42:18.033051    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826812323Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0314 19:42:18.033051    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: docker.service: Deactivated successfully.
	I0314 19:42:18.033090    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: Stopped Docker Application Container Engine.
	I0314 19:42:18.033090    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: Starting Docker Application Container Engine...
	I0314 19:42:18.033124    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.899936864Z" level=info msg="Starting up"
	I0314 19:42:18.033124    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.900739426Z" level=info msg="containerd not running, starting managed containerd"
	I0314 19:42:18.033163    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.901763504Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1049
	I0314 19:42:18.033163    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.930795337Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0314 19:42:18.033213    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.957961927Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0314 19:42:18.033213    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958063735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0314 19:42:18.033253    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958107338Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0314 19:42:18.033286    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958123339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033325    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958150841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.033359    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958163842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033398    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958360458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.033439    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958444864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033439    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958463766Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0314 19:42:18.033478    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958475466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033478    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958502569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033518    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958670881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033557    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961627209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.033592    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961715316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033631    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961871928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.033672    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961949634Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0314 19:42:18.033712    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961985336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0314 19:42:18.033747    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962005238Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0314 19:42:18.033787    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962017139Z" level=info msg="metadata content store policy set" policy=shared
	I0314 19:42:18.033787    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962188852Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0314 19:42:18.033828    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962280259Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0314 19:42:18.033828    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962311462Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0314 19:42:18.033869    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962328263Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0314 19:42:18.033869    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962344564Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0314 19:42:18.033932    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962393368Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0314 19:42:18.033932    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962810900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0314 19:42:18.033932    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962939310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0314 19:42:18.034006    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963018216Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0314 19:42:18.034006    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963036317Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0314 19:42:18.034006    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963060419Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034063    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963076820Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034063    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963091221Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034063    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963106323Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034124    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963121324Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034124    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963135425Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034181    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963148726Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034181    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963162027Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034181    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963184029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034265    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963205330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034265    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963220631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034295    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963270235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034295    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963286336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034339    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963300438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034339    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963313039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034339    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963326640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034405    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963341141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034405    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963357642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034405    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963369743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034477    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963382444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034477    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963395545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034477    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963411646Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0314 19:42:18.034541    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963433148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034541    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963449149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034541    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963461550Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0314 19:42:18.034612    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963512954Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0314 19:42:18.034612    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963529855Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0314 19:42:18.034612    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963593860Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0314 19:42:18.034667    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963606261Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0314 19:42:18.034667    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963665466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034727    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963679767Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0314 19:42:18.034727    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963695368Z" level=info msg="NRI interface is disabled by configuration."
	I0314 19:42:18.034727    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.964176205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0314 19:42:18.034785    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.964503330Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0314 19:42:18.034845    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.965392899Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0314 19:42:18.034845    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.966787506Z" level=info msg="containerd successfully booted in 0.037267s"
	I0314 19:42:18.034845    8428 command_runner.go:130] > Mar 14 19:40:54 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:54.945087153Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0314 19:42:18.034902    8428 command_runner.go:130] > Mar 14 19:40:54 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:54.972020025Z" level=info msg="Loading containers: start."
	I0314 19:42:18.034902    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.259462934Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0314 19:42:18.034902    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.336883289Z" level=info msg="Loading containers: done."
	I0314 19:42:18.034964    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.370669888Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0314 19:42:18.034964    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.370874904Z" level=info msg="Daemon has completed initialization"
	I0314 19:42:18.034964    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.415311921Z" level=info msg="API listen on /var/run/docker.sock"
	I0314 19:42:18.034964    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.415467233Z" level=info msg="API listen on [::]:2376"
	I0314 19:42:18.035022    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 systemd[1]: Started Docker Application Container Engine.
	I0314 19:42:18.035022    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:18.035022    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:18.035073    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:18.035073    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0314 19:42:18.035073    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Loaded network plugin cni"
	I0314 19:42:18.035073    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0314 19:42:18.035263    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker Info: &{ID:04f4855f-417a-422c-b5bb-3cf8a43fb438 Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2024-03-14T19:40:56.401787998Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0004c0150 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-442000 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0314 19:42:18.035263    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0314 19:42:18.035317    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0314 19:42:18.035317    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0314 19:42:18.035361    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Start cri-dockerd grpc backend"
	I0314 19:42:18.035361    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.035420    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-7446n_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773\""
	I0314 19:42:18.035420    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-d22jc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0\""
	I0314 19:42:18.035481    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294795352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.035481    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294882858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.035481    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294903860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035547    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.295303891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035547    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.380666857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.035608    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.380946878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.035608    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.381075288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035664    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.381588628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035664    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418754186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.035664    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418872295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.035735    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418919499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035735    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.419130315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035797    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35dd339c8a08d84d0d1a4d2c062b04d44baff78d20c6ed33ce967d50c18eaa3c/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.035797    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.449937485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.035797    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450067495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.035797    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450100297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035882    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450295012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035882    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67475bf80ddd91df7549842450a8d92c27cd16f814cd4e4c750a7cad7d82fc9f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.035938    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a27fa2188ee4cf0c44cde0f8cae03a83655bc574c856082192e3261801efcc72/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.035938    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c70744e60ac50b50085376d0c124ff15cc884b8a836b0085ef71a65ddb06bcfd/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.035938    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782527266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036027    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782834890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036056    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782945299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036056    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.783324628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036100    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950307171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036100    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950638097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036100    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950847113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036168    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.951959699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036168    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.033329657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036168    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.033826996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036238    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.034090516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036238    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.034801671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036293    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038389546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036293    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038570160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036293    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038686569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036355    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038972291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036355    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:05Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0314 19:42:18.036421    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056067890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036421    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056148096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036421    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056166397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036491    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056406816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036491    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.109761119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036549    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110023440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036549    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110099145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036596    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110475674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036596    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.116978275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036632    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117046280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036632    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117060481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036675    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117158888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036675    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a723f141543f2007cc07e048ef5836fca4ae70749b7266630f6c890bb233c09a/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.036740    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f513a7aff67200987eb0f28647720ea4cb9bbdb684fc85d1b08c0dd54563517d/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.036740    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432676357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036788    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432829669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036788    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432849370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036842    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.433004382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036842    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.579105320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036904    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580432922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036904    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580451623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036904    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580554931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036967    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9176b55446637c4407c9a64ce7d85fce2b395bcc0a22061f5f7ff304ff2d47f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.036967    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.897653021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.037017    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.897936143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.037017    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.898062553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037072    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.898459584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037072    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1043]: time="2024-03-14T19:41:37.705977514Z" level=info msg="ignoring event" container=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0314 19:42:18.037120    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706482647Z" level=info msg="shim disconnected" id=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 namespace=moby
	I0314 19:42:18.037120    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706677460Z" level=warning msg="cleaning up after shim disconnected" id=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 namespace=moby
	I0314 19:42:18.037175    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706692261Z" level=info msg="cleaning up dead shim" namespace=moby
	I0314 19:42:18.037175    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663136392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.037225    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663371709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.037262    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663411212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037262    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663537821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037316    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837487028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.037316    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837604337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.037371    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837625738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037371    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837719345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037419    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.848167835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.037419    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849098605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.037474    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849287919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037474    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849656747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.575693713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.575950032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.576019637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577004211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577168224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577288033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577583255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.576656985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:13 multinode-442000 dockerd[1043]: 2024/03/14 19:42:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.038075    8428 command_runner.go:130] > Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.038075    8428 command_runner.go:130] > Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.038075    8428 command_runner.go:130] > Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.038148    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.038148    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.038203    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.067678    8428 logs.go:123] Gathering logs for kube-apiserver [a598d24960de] ...
	I0314 19:42:18.067678    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a598d24960de"
	I0314 19:42:18.104507    8428 command_runner.go:130] ! I0314 19:41:02.580148       1 options.go:220] external host was not specified, using 172.17.93.236
	I0314 19:42:18.104607    8428 command_runner.go:130] ! I0314 19:41:02.584195       1 server.go:148] Version: v1.28.4
	I0314 19:42:18.104607    8428 command_runner.go:130] ! I0314 19:41:02.584361       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.104607    8428 command_runner.go:130] ! I0314 19:41:03.945945       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0314 19:42:18.104762    8428 command_runner.go:130] ! I0314 19:41:03.963375       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0314 19:42:18.104818    8428 command_runner.go:130] ! I0314 19:41:03.963415       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0314 19:42:18.104913    8428 command_runner.go:130] ! I0314 19:41:03.963973       1 instance.go:298] Using reconciler: lease
	I0314 19:42:18.104962    8428 command_runner.go:130] ! I0314 19:41:04.031000       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0314 19:42:18.104998    8428 command_runner.go:130] ! W0314 19:41:04.031118       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.104998    8428 command_runner.go:130] ! I0314 19:41:04.342643       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0314 19:42:18.104998    8428 command_runner.go:130] ! I0314 19:41:04.343120       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0314 19:42:18.105087    8428 command_runner.go:130] ! I0314 19:41:04.862959       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0314 19:42:18.105087    8428 command_runner.go:130] ! I0314 19:41:04.875745       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0314 19:42:18.105087    8428 command_runner.go:130] ! W0314 19:41:04.875858       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105186    8428 command_runner.go:130] ! W0314 19:41:04.875867       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.105186    8428 command_runner.go:130] ! I0314 19:41:04.876422       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0314 19:42:18.105186    8428 command_runner.go:130] ! W0314 19:41:04.876506       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105285    8428 command_runner.go:130] ! I0314 19:41:04.877676       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0314 19:42:18.105285    8428 command_runner.go:130] ! I0314 19:41:04.878707       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0314 19:42:18.105285    8428 command_runner.go:130] ! W0314 19:41:04.878804       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0314 19:42:18.105379    8428 command_runner.go:130] ! W0314 19:41:04.878812       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0314 19:42:18.105379    8428 command_runner.go:130] ! I0314 19:41:04.881331       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0314 19:42:18.105379    8428 command_runner.go:130] ! W0314 19:41:04.881418       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0314 19:42:18.105379    8428 command_runner.go:130] ! I0314 19:41:04.882613       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0314 19:42:18.105479    8428 command_runner.go:130] ! W0314 19:41:04.882706       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105479    8428 command_runner.go:130] ! W0314 19:41:04.882714       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.105479    8428 command_runner.go:130] ! I0314 19:41:04.883473       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0314 19:42:18.105575    8428 command_runner.go:130] ! W0314 19:41:04.883562       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105575    8428 command_runner.go:130] ! W0314 19:41:04.883619       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105575    8428 command_runner.go:130] ! I0314 19:41:04.884340       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0314 19:42:18.105575    8428 command_runner.go:130] ! I0314 19:41:04.886289       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0314 19:42:18.105667    8428 command_runner.go:130] ! W0314 19:41:04.886373       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105752    8428 command_runner.go:130] ! W0314 19:41:04.886382       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.105752    8428 command_runner.go:130] ! I0314 19:41:04.886877       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0314 19:42:18.105752    8428 command_runner.go:130] ! W0314 19:41:04.886971       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105752    8428 command_runner.go:130] ! W0314 19:41:04.886979       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.105846    8428 command_runner.go:130] ! I0314 19:41:04.888213       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0314 19:42:18.105846    8428 command_runner.go:130] ! W0314 19:41:04.888261       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0314 19:42:18.105949    8428 command_runner.go:130] ! I0314 19:41:04.903461       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0314 19:42:18.105949    8428 command_runner.go:130] ! W0314 19:41:04.903509       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105949    8428 command_runner.go:130] ! W0314 19:41:04.903517       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.105949    8428 command_runner.go:130] ! I0314 19:41:04.906409       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0314 19:42:18.106050    8428 command_runner.go:130] ! W0314 19:41:04.906458       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.106050    8428 command_runner.go:130] ! W0314 19:41:04.906466       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.106050    8428 command_runner.go:130] ! I0314 19:41:04.915366       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0314 19:42:18.106163    8428 command_runner.go:130] ! W0314 19:41:04.915463       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.106255    8428 command_runner.go:130] ! W0314 19:41:04.915472       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.106313    8428 command_runner.go:130] ! I0314 19:41:04.916839       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0314 19:42:18.106313    8428 command_runner.go:130] ! I0314 19:41:04.918318       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0314 19:42:18.106313    8428 command_runner.go:130] ! W0314 19:41:04.918410       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.106403    8428 command_runner.go:130] ! W0314 19:41:04.918418       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.106403    8428 command_runner.go:130] ! I0314 19:41:04.922469       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0314 19:42:18.106403    8428 command_runner.go:130] ! W0314 19:41:04.922563       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0314 19:42:18.106403    8428 command_runner.go:130] ! W0314 19:41:04.922576       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0314 19:42:18.106504    8428 command_runner.go:130] ! I0314 19:41:04.923589       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0314 19:42:18.106504    8428 command_runner.go:130] ! W0314 19:41:04.923671       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.106604    8428 command_runner.go:130] ! W0314 19:41:04.923678       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.106604    8428 command_runner.go:130] ! I0314 19:41:04.924323       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0314 19:42:18.106604    8428 command_runner.go:130] ! W0314 19:41:04.924409       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.106701    8428 command_runner.go:130] ! I0314 19:41:04.946149       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0314 19:42:18.106701    8428 command_runner.go:130] ! W0314 19:41:04.946188       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.106701    8428 command_runner.go:130] ! I0314 19:41:05.649195       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:18.106701    8428 command_runner.go:130] ! I0314 19:41:05.649351       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:18.106801    8428 command_runner.go:130] ! I0314 19:41:05.650113       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0314 19:42:18.106801    8428 command_runner.go:130] ! I0314 19:41:05.651281       1 secure_serving.go:213] Serving securely on [::]:8443
	I0314 19:42:18.106801    8428 command_runner.go:130] ! I0314 19:41:05.651311       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:18.106801    8428 command_runner.go:130] ! I0314 19:41:05.651726       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0314 19:42:18.106906    8428 command_runner.go:130] ! I0314 19:41:05.651907       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0314 19:42:18.106906    8428 command_runner.go:130] ! I0314 19:41:05.654468       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0314 19:42:18.106906    8428 command_runner.go:130] ! I0314 19:41:05.654814       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 19:42:18.107009    8428 command_runner.go:130] ! I0314 19:41:05.655201       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 19:42:18.107009    8428 command_runner.go:130] ! I0314 19:41:05.656049       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0314 19:42:18.107009    8428 command_runner.go:130] ! I0314 19:41:05.656308       1 available_controller.go:423] Starting AvailableConditionController
	I0314 19:42:18.107117    8428 command_runner.go:130] ! I0314 19:41:05.656404       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0314 19:42:18.107117    8428 command_runner.go:130] ! I0314 19:41:05.651597       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0314 19:42:18.107117    8428 command_runner.go:130] ! I0314 19:41:05.656599       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0314 19:42:18.107117    8428 command_runner.go:130] ! I0314 19:41:05.658623       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0314 19:42:18.107223    8428 command_runner.go:130] ! I0314 19:41:05.658785       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0314 19:42:18.107223    8428 command_runner.go:130] ! I0314 19:41:05.659483       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0314 19:42:18.107223    8428 command_runner.go:130] ! I0314 19:41:05.661076       1 aggregator.go:164] waiting for initial CRD sync...
	I0314 19:42:18.107223    8428 command_runner.go:130] ! I0314 19:41:05.662487       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0314 19:42:18.107330    8428 command_runner.go:130] ! I0314 19:41:05.662789       1 controller.go:78] Starting OpenAPI AggregationController
	I0314 19:42:18.107330    8428 command_runner.go:130] ! I0314 19:41:05.727194       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:18.107330    8428 command_runner.go:130] ! I0314 19:41:05.728512       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:18.107424    8428 command_runner.go:130] ! I0314 19:41:05.729067       1 controller.go:116] Starting legacy_token_tracking_controller
	I0314 19:42:18.107451    8428 command_runner.go:130] ! I0314 19:41:05.729317       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0314 19:42:18.107451    8428 command_runner.go:130] ! I0314 19:41:05.729432       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0314 19:42:18.107451    8428 command_runner.go:130] ! I0314 19:41:05.729507       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0314 19:42:18.107451    8428 command_runner.go:130] ! I0314 19:41:05.729606       1 controller.go:134] Starting OpenAPI controller
	I0314 19:42:18.107451    8428 command_runner.go:130] ! I0314 19:41:05.729710       1 controller.go:85] Starting OpenAPI V3 controller
	I0314 19:42:18.107451    8428 command_runner.go:130] ! I0314 19:41:05.729812       1 naming_controller.go:291] Starting NamingConditionController
	I0314 19:42:18.107633    8428 command_runner.go:130] ! I0314 19:41:05.729911       1 establishing_controller.go:76] Starting EstablishingController
	I0314 19:42:18.107633    8428 command_runner.go:130] ! I0314 19:41:05.730411       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0314 19:42:18.107633    8428 command_runner.go:130] ! I0314 19:41:05.730521       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0314 19:42:18.107633    8428 command_runner.go:130] ! I0314 19:41:05.730616       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0314 19:42:18.107741    8428 command_runner.go:130] ! I0314 19:41:05.799477       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 19:42:18.107741    8428 command_runner.go:130] ! I0314 19:41:05.813580       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 19:42:18.107741    8428 command_runner.go:130] ! I0314 19:41:05.830168       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 19:42:18.107741    8428 command_runner.go:130] ! I0314 19:41:05.830217       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 19:42:18.107847    8428 command_runner.go:130] ! I0314 19:41:05.830281       1 aggregator.go:166] initial CRD sync complete...
	I0314 19:42:18.107847    8428 command_runner.go:130] ! I0314 19:41:05.830289       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 19:42:18.107847    8428 command_runner.go:130] ! I0314 19:41:05.830295       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 19:42:18.107847    8428 command_runner.go:130] ! I0314 19:41:05.830301       1 cache.go:39] Caches are synced for autoregister controller
	I0314 19:42:18.107944    8428 command_runner.go:130] ! I0314 19:41:05.846941       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 19:42:18.107999    8428 command_runner.go:130] ! I0314 19:41:05.857057       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 19:42:18.107999    8428 command_runner.go:130] ! I0314 19:41:05.858966       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 19:42:18.108071    8428 command_runner.go:130] ! I0314 19:41:05.865554       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 19:42:18.108071    8428 command_runner.go:130] ! I0314 19:41:05.865721       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 19:42:18.108071    8428 command_runner.go:130] ! I0314 19:41:06.667315       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 19:42:18.108071    8428 command_runner.go:130] ! W0314 19:41:07.118314       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.17.93.236]
	I0314 19:42:18.108164    8428 command_runner.go:130] ! I0314 19:41:07.120612       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 19:42:18.108192    8428 command_runner.go:130] ! I0314 19:41:07.135973       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 19:42:18.108192    8428 command_runner.go:130] ! I0314 19:41:09.049225       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 19:42:18.108192    8428 command_runner.go:130] ! I0314 19:41:09.264220       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 19:42:18.108192    8428 command_runner.go:130] ! I0314 19:41:09.277110       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 19:42:18.108192    8428 command_runner.go:130] ! I0314 19:41:09.393446       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 19:42:18.108192    8428 command_runner.go:130] ! I0314 19:41:09.422214       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
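The apiserver log above follows the normal startup sequence: admission plugins load, API groups register, "Serving securely on [::]:8443" appears, and the informer caches sync. Once that point is reached, the health endpoints become reachable on the advertised address (172.17.93.236 above). A hypothetical probe, not part of this test suite, assuming /readyz is anonymously readable as in a default minikube setup:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    )

    func main() {
    	// Skip certificate verification for this illustration only; the test
    	// VM's apiserver certificate is signed by minikube's local CA.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://172.17.93.236:8443/readyz")
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("readyz:", resp.Status) // expect "200 OK" after startup
    }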
	I0314 19:42:18.114857    8428 logs.go:123] Gathering logs for kube-controller-manager [16b80f73683d] ...
	I0314 19:42:18.114857    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b80f73683d"
	I0314 19:42:18.142896    8428 command_runner.go:130] ! I0314 19:18:57.791996       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:18:58.802083       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:18:58.802123       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:18:58.803952       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:18:58.804068       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:18:58.807259       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:18:58.807321       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.211766       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.241058       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.241394       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.241421       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.277645       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.277842       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.277987       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.278099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.278176       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.278283       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.278389       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.278566       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! W0314 19:19:03.278710       1 shared_informer.go:593] resyncPeriod 13h23m0.648968128s is smaller than resyncCheckPeriod 15h46m21.421594093s and the informer has already started. Changing it to 15h46m21.421594093s
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.278915       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.279052       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.279196       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.279291       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.279313       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0314 19:42:18.144131    8428 command_runner.go:130] ! I0314 19:19:03.279560       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0314 19:42:18.144131    8428 command_runner.go:130] ! I0314 19:19:03.279688       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0314 19:42:18.144131    8428 command_runner.go:130] ! I0314 19:19:03.279834       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0314 19:42:18.144131    8428 command_runner.go:130] ! I0314 19:19:03.279857       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0314 19:42:18.144131    8428 command_runner.go:130] ! I0314 19:19:03.279927       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0314 19:42:18.144131    8428 command_runner.go:130] ! I0314 19:19:03.280011       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.280106       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.280148       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.280224       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.280306       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.280392       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.297527       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.297675       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.297706       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.310691       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.310864       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.311121       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.311163       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.311170       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.312491       1 shared_informer.go:318] Caches are synced for tokens
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.324271       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.324640       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.324856       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.341489       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.341829       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.359979       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.360131       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.373006       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.373343       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.373606       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:42:18.144720    8428 command_runner.go:130] ! I0314 19:19:03.385026       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0314 19:42:18.144720    8428 command_runner.go:130] ! I0314 19:19:03.385081       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0314 19:42:18.144720    8428 command_runner.go:130] ! I0314 19:19:03.385807       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0314 19:42:18.144720    8428 command_runner.go:130] ! I0314 19:19:03.399556       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0314 19:42:18.144720    8428 command_runner.go:130] ! I0314 19:19:03.399796       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0314 19:42:18.144720    8428 command_runner.go:130] ! I0314 19:19:03.399936       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.400078       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.400349       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.400489       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.521977       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.522076       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.522086       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.567446       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0314 19:42:18.144975    8428 command_runner.go:130] ! I0314 19:19:03.567574       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0314 19:42:18.144975    8428 command_runner.go:130] ! I0314 19:19:03.567615       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:18.144975    8428 command_runner.go:130] ! I0314 19:19:03.568792       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0314 19:42:18.144975    8428 command_runner.go:130] ! I0314 19:19:03.568891       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.569119       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.570147       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.570261       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.570356       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.571403       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.571529       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.571434       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0314 19:42:18.145243    8428 command_runner.go:130] ! I0314 19:19:03.572095       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:42:18.145243    8428 command_runner.go:130] ! I0314 19:19:03.723142       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0314 19:42:18.145243    8428 command_runner.go:130] ! I0314 19:19:03.723289       1 ttl_controller.go:124] "Starting TTL controller"
	I0314 19:42:18.145243    8428 command_runner.go:130] ! I0314 19:19:03.723300       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0314 19:42:18.145243    8428 command_runner.go:130] ! I0314 19:19:13.784656       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0314 19:42:18.145243    8428 command_runner.go:130] ! I0314 19:19:13.784710       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.784891       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.784975       1 shared_informer.go:311] Waiting for caches to sync for node
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.813537       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.814099       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.814528       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.831516       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.831928       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.832023       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.832052       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.876141       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.876437       1 horizontal.go:200] "Starting HPA controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.876448       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.892498       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.892891       1 disruption.go:433] "Sending events to api server."
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.893092       1 disruption.go:444] "Starting disruption controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.893185       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.895299       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.895861       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.896105       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.908480       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.908861       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.908873       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.929369       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.929803       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.930050       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0314 19:42:18.145863    8428 command_runner.go:130] ! I0314 19:19:13.974683       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:42:18.145863    8428 command_runner.go:130] ! I0314 19:19:13.974899       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:42:18.145863    8428 command_runner.go:130] ! I0314 19:19:13.975108       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:42:18.145863    8428 command_runner.go:130] ! E0314 19:19:14.134866       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0314 19:42:18.145863    8428 command_runner.go:130] ! I0314 19:19:14.135266       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0314 19:42:18.145964    8428 command_runner.go:130] ! E0314 19:19:14.170400       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 19:42:18.145964    8428 command_runner.go:130] ! I0314 19:19:14.170426       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 19:42:18.145964    8428 command_runner.go:130] ! I0314 19:19:14.324676       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0314 19:42:18.145964    8428 command_runner.go:130] ! I0314 19:19:14.324865       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 19:42:18.146055    8428 command_runner.go:130] ! I0314 19:19:14.325169       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 19:42:18.146055    8428 command_runner.go:130] ! I0314 19:19:14.474401       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0314 19:42:18.146055    8428 command_runner.go:130] ! I0314 19:19:14.474562       1 controller.go:169] "Starting ephemeral volume controller"
	I0314 19:42:18.146055    8428 command_runner.go:130] ! I0314 19:19:14.474660       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0314 19:42:18.146055    8428 command_runner.go:130] ! I0314 19:19:14.633668       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0314 19:42:18.146156    8428 command_runner.go:130] ! I0314 19:19:14.633821       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0314 19:42:18.146156    8428 command_runner.go:130] ! I0314 19:19:14.633832       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0314 19:42:18.146156    8428 command_runner.go:130] ! I0314 19:19:14.773955       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0314 19:42:18.146156    8428 command_runner.go:130] ! I0314 19:19:14.774019       1 gc_controller.go:101] "Starting GC controller"
	I0314 19:42:18.146246    8428 command_runner.go:130] ! I0314 19:19:14.774027       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 19:42:18.146246    8428 command_runner.go:130] ! I0314 19:19:14.925568       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0314 19:42:18.146246    8428 command_runner.go:130] ! I0314 19:19:14.925814       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 19:42:18.146246    8428 command_runner.go:130] ! I0314 19:19:14.925828       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 19:42:18.146340    8428 command_runner.go:130] ! I0314 19:19:15.075328       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0314 19:42:18.146340    8428 command_runner.go:130] ! I0314 19:19:15.075556       1 job_controller.go:226] "Starting job controller"
	I0314 19:42:18.146340    8428 command_runner.go:130] ! I0314 19:19:15.075634       1 shared_informer.go:311] Waiting for caches to sync for job
	I0314 19:42:18.146340    8428 command_runner.go:130] ! I0314 19:19:15.225929       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0314 19:42:18.146340    8428 command_runner.go:130] ! I0314 19:19:15.226065       1 expand_controller.go:328] "Starting expand controller"
	I0314 19:42:18.146430    8428 command_runner.go:130] ! I0314 19:19:15.226077       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0314 19:42:18.146430    8428 command_runner.go:130] ! I0314 19:19:15.378471       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:42:18.146430    8428 command_runner.go:130] ! I0314 19:19:15.378640       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:42:18.146430    8428 command_runner.go:130] ! I0314 19:19:15.379237       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:42:18.146519    8428 command_runner.go:130] ! I0314 19:19:15.525089       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:42:18.146519    8428 command_runner.go:130] ! I0314 19:19:15.525565       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:42:18.146607    8428 command_runner.go:130] ! I0314 19:19:15.525643       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:42:18.146607    8428 command_runner.go:130] ! I0314 19:19:15.679545       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:42:18.146607    8428 command_runner.go:130] ! I0314 19:19:15.679611       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:42:18.146607    8428 command_runner.go:130] ! I0314 19:19:15.679619       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:42:18.146696    8428 command_runner.go:130] ! I0314 19:19:15.825516       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:42:18.146696    8428 command_runner.go:130] ! I0314 19:19:15.825908       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:42:18.146696    8428 command_runner.go:130] ! I0314 19:19:15.825920       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:42:18.146785    8428 command_runner.go:130] ! I0314 19:19:15.976308       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0314 19:42:18.146785    8428 command_runner.go:130] ! I0314 19:19:15.976673       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0314 19:42:18.146785    8428 command_runner.go:130] ! I0314 19:19:15.976858       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0314 19:42:18.146785    8428 command_runner.go:130] ! I0314 19:19:15.993409       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:18.146871    8428 command_runner.go:130] ! I0314 19:19:16.017841       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000\" does not exist"
	I0314 19:42:18.146871    8428 command_runner.go:130] ! I0314 19:19:16.022817       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:18.146871    8428 command_runner.go:130] ! I0314 19:19:16.023332       1 shared_informer.go:318] Caches are synced for TTL
	I0314 19:42:18.146967    8428 command_runner.go:130] ! I0314 19:19:16.025413       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 19:42:18.146967    8428 command_runner.go:130] ! I0314 19:19:16.025667       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 19:42:18.146967    8428 command_runner.go:130] ! I0314 19:19:16.025909       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 19:42:18.146967    8428 command_runner.go:130] ! I0314 19:19:16.026194       1 shared_informer.go:318] Caches are synced for expand
	I0314 19:42:18.146967    8428 command_runner.go:130] ! I0314 19:19:16.030689       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 19:42:18.147059    8428 command_runner.go:130] ! I0314 19:19:16.042937       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 19:42:18.147059    8428 command_runner.go:130] ! I0314 19:19:16.063170       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 19:42:18.147059    8428 command_runner.go:130] ! I0314 19:19:16.069816       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0314 19:42:18.147059    8428 command_runner.go:130] ! I0314 19:19:16.069953       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0314 19:42:18.147149    8428 command_runner.go:130] ! I0314 19:19:16.071382       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:18.147149    8428 command_runner.go:130] ! I0314 19:19:16.072881       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 19:42:18.147149    8428 command_runner.go:130] ! I0314 19:19:16.075260       1 shared_informer.go:318] Caches are synced for GC
	I0314 19:42:18.147149    8428 command_runner.go:130] ! I0314 19:19:16.075273       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 19:42:18.147237    8428 command_runner.go:130] ! I0314 19:19:16.075312       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 19:42:18.147237    8428 command_runner.go:130] ! I0314 19:19:16.076852       1 shared_informer.go:318] Caches are synced for HPA
	I0314 19:42:18.147237    8428 command_runner.go:130] ! I0314 19:19:16.077008       1 shared_informer.go:318] Caches are synced for crt configmap
	I0314 19:42:18.147237    8428 command_runner.go:130] ! I0314 19:19:16.077022       1 shared_informer.go:318] Caches are synced for job
	I0314 19:42:18.147237    8428 command_runner.go:130] ! I0314 19:19:16.079681       1 shared_informer.go:318] Caches are synced for deployment
	I0314 19:42:18.147325    8428 command_runner.go:130] ! I0314 19:19:16.079893       1 shared_informer.go:318] Caches are synced for cronjob
	I0314 19:42:18.147325    8428 command_runner.go:130] ! I0314 19:19:16.085788       1 shared_informer.go:318] Caches are synced for node
	I0314 19:42:18.147325    8428 command_runner.go:130] ! I0314 19:19:16.085869       1 range_allocator.go:174] "Sending events to api server"
	I0314 19:42:18.147325    8428 command_runner.go:130] ! I0314 19:19:16.085937       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 19:42:18.147414    8428 command_runner.go:130] ! I0314 19:19:16.085945       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 19:42:18.147414    8428 command_runner.go:130] ! I0314 19:19:16.085951       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 19:42:18.147414    8428 command_runner.go:130] ! I0314 19:19:16.086224       1 shared_informer.go:318] Caches are synced for PVC protection
	I0314 19:42:18.147414    8428 command_runner.go:130] ! I0314 19:19:16.093730       1 shared_informer.go:318] Caches are synced for disruption
	I0314 19:42:18.147504    8428 command_runner.go:130] ! I0314 19:19:16.093802       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:18.147504    8428 command_runner.go:130] ! I0314 19:19:16.097148       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 19:42:18.147504    8428 command_runner.go:130] ! I0314 19:19:16.098688       1 shared_informer.go:318] Caches are synced for service account
	I0314 19:42:18.147504    8428 command_runner.go:130] ! I0314 19:19:16.102404       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000" podCIDRs=["10.244.0.0/24"]
	I0314 19:42:18.147592    8428 command_runner.go:130] ! I0314 19:19:16.112396       1 shared_informer.go:318] Caches are synced for taint
	I0314 19:42:18.147592    8428 command_runner.go:130] ! I0314 19:19:16.112849       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 19:42:18.147592    8428 command_runner.go:130] ! I0314 19:19:16.113070       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000"
	I0314 19:42:18.147680    8428 command_runner.go:130] ! I0314 19:19:16.113155       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0314 19:42:18.147680    8428 command_runner.go:130] ! I0314 19:19:16.112659       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 19:42:18.147680    8428 command_runner.go:130] ! I0314 19:19:16.113865       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 19:42:18.147680    8428 command_runner.go:130] ! I0314 19:19:16.113966       1 taint_manager.go:210] "Sending events to api server"
	I0314 19:42:18.147680    8428 command_runner.go:130] ! I0314 19:19:16.115068       1 shared_informer.go:318] Caches are synced for namespace
	I0314 19:42:18.147777    8428 command_runner.go:130] ! I0314 19:19:16.118281       1 event.go:307] "Event occurred" object="multinode-442000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000 event: Registered Node multinode-442000 in Controller"
	I0314 19:42:18.147777    8428 command_runner.go:130] ! I0314 19:19:16.134584       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0314 19:42:18.147777    8428 command_runner.go:130] ! I0314 19:19:16.151625       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.147866    8428 command_runner.go:130] ! I0314 19:19:16.171551       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.147866    8428 command_runner.go:130] ! I0314 19:19:16.174341       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.147959    8428 command_runner.go:130] ! I0314 19:19:16.174358       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.147959    8428 command_runner.go:130] ! I0314 19:19:16.184987       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:18.147959    8428 command_runner.go:130] ! I0314 19:19:16.223118       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 19:42:18.147959    8428 command_runner.go:130] ! I0314 19:19:16.225526       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 19:42:18.148048    8428 command_runner.go:130] ! I0314 19:19:16.225950       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 19:42:18.148048    8428 command_runner.go:130] ! I0314 19:19:16.274020       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 19:42:18.148048    8428 command_runner.go:130] ! I0314 19:19:16.320250       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7b9lf"
	I0314 19:42:18.148142    8428 command_runner.go:130] ! I0314 19:19:16.328650       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cg28g"
	I0314 19:42:18.148142    8428 command_runner.go:130] ! I0314 19:19:16.626855       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:18.148142    8428 command_runner.go:130] ! I0314 19:19:16.633099       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:18.148142    8428 command_runner.go:130] ! I0314 19:19:16.633344       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 19:42:18.148231    8428 command_runner.go:130] ! I0314 19:19:16.789964       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0314 19:42:18.148231    8428 command_runner.go:130] ! I0314 19:19:17.099870       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pvxpr"
	I0314 19:42:18.148319    8428 command_runner.go:130] ! I0314 19:19:17.114819       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-d22jc"
	I0314 19:42:18.148319    8428 command_runner.go:130] ! I0314 19:19:17.146456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="355.713874ms"
	I0314 19:42:18.148319    8428 command_runner.go:130] ! I0314 19:19:17.166202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.688691ms"
	I0314 19:42:18.148407    8428 command_runner.go:130] ! I0314 19:19:17.169087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="2.771063ms"
	I0314 19:42:18.148407    8428 command_runner.go:130] ! I0314 19:19:18.399096       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0314 19:42:18.148407    8428 command_runner.go:130] ! I0314 19:19:18.448322       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-pvxpr"
	I0314 19:42:18.148495    8428 command_runner.go:130] ! I0314 19:19:18.482373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.944747ms"
	I0314 19:42:18.148495    8428 command_runner.go:130] ! I0314 19:19:18.500300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.716936ms"
	I0314 19:42:18.148495    8428 command_runner.go:130] ! I0314 19:19:18.500887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.317µs"
	I0314 19:42:18.148584    8428 command_runner.go:130] ! I0314 19:19:26.475232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.515µs"
	I0314 19:42:18.148584    8428 command_runner.go:130] ! I0314 19:19:26.505160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.309µs"
	I0314 19:42:18.148584    8428 command_runner.go:130] ! I0314 19:19:28.423231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.310782ms"
	I0314 19:42:18.148584    8428 command_runner.go:130] ! I0314 19:19:28.423925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.006µs"
	I0314 19:42:18.148675    8428 command_runner.go:130] ! I0314 19:19:31.116802       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0314 19:42:18.148675    8428 command_runner.go:130] ! I0314 19:22:02.467925       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:42:18.148754    8428 command_runner.go:130] ! I0314 19:22:02.479576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m02" podCIDRs=["10.244.1.0/24"]
	I0314 19:42:18.148790    8428 command_runner.go:130] ! I0314 19:22:02.507610       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-72dzs"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:02.511169       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-c7m4p"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:06.145908       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:06.146201       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:20.862710       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.188036       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.218022       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8drpb"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.241867       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-7446n"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.267427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="80.313691ms"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.292961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="25.159362ms"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.311264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.241692ms"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.311407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="93.911µs"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:48.320252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.515467ms"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:48.320403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.303µs"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:48.344640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.018521ms"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:48.344838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.804µs"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:26:25.208780       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:26:25.214591       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:26:25.248082       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.2.0/24"]
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:26:25.265233       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-r7zdb"
	I0314 19:42:18.149355    8428 command_runner.go:130] ! I0314 19:26:25.273144       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w2qls"
	I0314 19:42:18.149443    8428 command_runner.go:130] ! I0314 19:26:26.207170       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:42:18.149443    8428 command_runner.go:130] ! I0314 19:26:26.207236       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:18.149530    8428 command_runner.go:130] ! I0314 19:26:43.758846       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.149530    8428 command_runner.go:130] ! I0314 19:33:46.333556       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:42:18.149618    8428 command_runner.go:130] ! I0314 19:33:46.333891       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.149618    8428 command_runner.go:130] ! I0314 19:33:46.348976       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.149706    8428 command_runner.go:130] ! I0314 19:33:46.370200       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.149706    8428 command_runner.go:130] ! I0314 19:36:39.868492       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.149706    8428 command_runner.go:130] ! I0314 19:36:41.400896       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-442000-m03 event: Removing Node multinode-442000-m03 from Controller"
	I0314 19:42:18.149794    8428 command_runner.go:130] ! I0314 19:36:47.335802       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:18.149883    8428 command_runner.go:130] ! I0314 19:36:47.336128       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.149883    8428 command_runner.go:130] ! I0314 19:36:47.352987       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.3.0/24"]
	I0314 19:42:18.149883    8428 command_runner.go:130] ! I0314 19:36:51.403261       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:18.149973    8428 command_runner.go:130] ! I0314 19:36:54.976864       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.149973    8428 command_runner.go:130] ! I0314 19:38:21.463528       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.149973    8428 command_runner.go:130] ! I0314 19:38:21.463818       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:42:18.150063    8428 command_runner.go:130] ! I0314 19:38:21.486796       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.150063    8428 command_runner.go:130] ! I0314 19:38:21.501217       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.167222    8428 logs.go:123] Gathering logs for kindnet [999e4c168afe] ...
	I0314 19:42:18.167222    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 999e4c168afe"
	I0314 19:42:18.193087    8428 command_runner.go:130] ! I0314 19:41:08.409720       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0314 19:42:18.193584    8428 command_runner.go:130] ! I0314 19:41:08.410195       1 main.go:107] hostIP = 172.17.93.236
	I0314 19:42:18.193620    8428 command_runner.go:130] ! podIP = 172.17.93.236
	I0314 19:42:18.193620    8428 command_runner.go:130] ! I0314 19:41:08.411178       1 main.go:116] setting mtu 1500 for CNI 
	I0314 19:42:18.193620    8428 command_runner.go:130] ! I0314 19:41:08.411230       1 main.go:146] kindnetd IP family: "ipv4"
	I0314 19:42:18.193620    8428 command_runner.go:130] ! I0314 19:41:08.411277       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0314 19:42:18.193687    8428 command_runner.go:130] ! I0314 19:41:38.747509       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0314 19:42:18.193687    8428 command_runner.go:130] ! I0314 19:41:38.770843       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:18.193687    8428 command_runner.go:130] ! I0314 19:41:38.770994       1 main.go:227] handling current node
	I0314 19:42:18.193725    8428 command_runner.go:130] ! I0314 19:41:38.771413       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:18.193747    8428 command_runner.go:130] ! I0314 19:41:38.771428       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:18.193747    8428 command_runner.go:130] ! I0314 19:41:38.771670       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.80.135 Flags: [] Table: 0} 
	I0314 19:42:18.193747    8428 command_runner.go:130] ! I0314 19:41:38.771817       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:18.193800    8428 command_runner.go:130] ! I0314 19:41:38.771827       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:18.193800    8428 command_runner.go:130] ! I0314 19:41:38.771944       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.84.215 Flags: [] Table: 0} 
	I0314 19:42:18.193800    8428 command_runner.go:130] ! I0314 19:41:48.777997       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:18.193800    8428 command_runner.go:130] ! I0314 19:41:48.778091       1 main.go:227] handling current node
	I0314 19:42:18.193800    8428 command_runner.go:130] ! I0314 19:41:48.778105       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:18.193861    8428 command_runner.go:130] ! I0314 19:41:48.778113       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:18.193861    8428 command_runner.go:130] ! I0314 19:41:48.778217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:18.193861    8428 command_runner.go:130] ! I0314 19:41:48.778373       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:18.193861    8428 command_runner.go:130] ! I0314 19:41:58.793215       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:18.193861    8428 command_runner.go:130] ! I0314 19:41:58.793285       1 main.go:227] handling current node
	I0314 19:42:18.193937    8428 command_runner.go:130] ! I0314 19:41:58.793297       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:18.193937    8428 command_runner.go:130] ! I0314 19:41:58.793304       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:18.193937    8428 command_runner.go:130] ! I0314 19:41:58.793793       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:18.193937    8428 command_runner.go:130] ! I0314 19:41:58.793859       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:18.193998    8428 command_runner.go:130] ! I0314 19:42:08.808709       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:18.193998    8428 command_runner.go:130] ! I0314 19:42:08.808803       1 main.go:227] handling current node
	I0314 19:42:18.193998    8428 command_runner.go:130] ! I0314 19:42:08.808818       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:18.193998    8428 command_runner.go:130] ! I0314 19:42:08.808826       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:18.194061    8428 command_runner.go:130] ! I0314 19:42:08.809153       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:18.194061    8428 command_runner.go:130] ! I0314 19:42:08.809168       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:18.196186    8428 logs.go:123] Gathering logs for kubelet ...
	I0314 19:42:18.196186    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516074    1388 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516440    1388 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516773    1388 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: E0314 19:40:57.516893    1388 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293295    1450 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293422    1450 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293759    1450 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: E0314 19:40:58.293809    1450 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270178    1523 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270275    1523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270469    1523 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.272943    1523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.286808    1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.333673    1523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335204    1523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0314 19:42:18.231058    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335543    1523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","To
pologyManagerPolicyOptions":null}
	I0314 19:42:18.231058    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335688    1523 topology_manager.go:138] "Creating topology manager with none policy"
	I0314 19:42:18.231058    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335703    1523 container_manager_linux.go:301] "Creating device plugin manager"
	I0314 19:42:18.231058    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.336879    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0314 19:42:18.231058    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.338507    1523 kubelet.go:393] "Attempting to sync node with API server"
	I0314 19:42:18.231173    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.338606    1523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0314 19:42:18.231173    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.339942    1523 kubelet.go:309] "Adding apiserver pod source"
	I0314 19:42:18.231173    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.339973    1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0314 19:42:18.231173    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.342644    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.231284    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.342728    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.231284    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.352846    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.231284    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.353005    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.231284    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.362091    1523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0314 19:42:18.231394    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.368654    1523 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0314 19:42:18.231394    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.370831    1523 server.go:1232] "Started kubelet"
	I0314 19:42:18.231394    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.376404    1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0314 19:42:18.231394    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.381472    1523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0314 19:42:18.231394    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.381715    1523 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0314 19:42:18.231394    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.383735    1523 server.go:462] "Adding debug handlers to kubelet server"
	I0314 19:42:18.231503    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.385265    1523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0314 19:42:18.231503    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.387577    1523 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0314 19:42:18.231503    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.392182    1523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0314 19:42:18.231503    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.392853    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="200ms"
	I0314 19:42:18.231612    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.392921    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.231612    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.392970    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.231721    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.402867    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-442000.17bcb8e6e82683f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-442000", UID:"multinode-442000", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-442000"}, FirstTimestamp:time.Date(2024, ti
me.March, 14, 19, 41, 0, 370772979, time.Local), LastTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-442000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.93.236:8443: connect: connection refused'(may retry after sleeping)
	I0314 19:42:18.231721    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.431568    1523 reconciler_new.go:29] "Reconciler: start to sync state"
	I0314 19:42:18.231721    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453043    1523 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0314 19:42:18.231721    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453062    1523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0314 19:42:18.231840    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453088    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0314 19:42:18.231840    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453812    1523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0314 19:42:18.231840    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453838    1523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0314 19:42:18.231900    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453846    1523 policy_none.go:49] "None policy: Start"
	I0314 19:42:18.231900    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.459854    1523 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0314 19:42:18.231944    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.459925    1523 state_mem.go:35] "Initializing new in-memory state store"
	I0314 19:42:18.231944    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.460715    1523 state_mem.go:75] "Updated machine memory state"
	I0314 19:42:18.231944    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.466366    1523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0314 19:42:18.231944    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.471455    1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0314 19:42:18.231944    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.475344    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0314 19:42:18.232145    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478780    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0314 19:42:18.232145    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478820    1523 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0314 19:42:18.232145    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478846    1523 kubelet.go:2303] "Starting kubelet main sync loop"
	I0314 19:42:18.232266    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.478899    1523 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0314 19:42:18.232266    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.485952    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.232266    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.487569    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.232266    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.493845    1523 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-442000\" not found"
	I0314 19:42:18.232378    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.501023    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:18.232513    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.501915    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:18.232620    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.503739    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0314 19:42:18.232620    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0314 19:42:18.232782    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0314 19:42:18.232871    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0314 19:42:18.232871    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0314 19:42:18.232871    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.578961    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5b88117f99a24e81a324ab026c69a7058a7c1bc88d9b9a5386134abc257bba"
	I0314 19:42:18.232871    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.578983    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e39762d7a6437164a9b2c6dd22b1f36b57514310190ce4acc3349001cb1774"
	I0314 19:42:18.232980    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.579017    1523 topology_manager.go:215] "Topology Admit Handler" podUID="2b2434280023596d1e3c90125a7219ed" podNamespace="kube-system" podName="kube-scheduler-multinode-442000"
	I0314 19:42:18.232980    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.592991    1523 topology_manager.go:215] "Topology Admit Handler" podUID="7754d2f32966faec8123dc3b8a2af767" podNamespace="kube-system" podName="kube-apiserver-multinode-442000"
	I0314 19:42:18.232980    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.594193    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="400ms"
	I0314 19:42:18.233091    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.609977    1523 topology_manager.go:215] "Topology Admit Handler" podUID="a7ee530f2bd843eddeace8cd6ec0d204" podNamespace="kube-system" podName="kube-controller-manager-multinode-442000"
	I0314 19:42:18.233091    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.622973    1523 topology_manager.go:215] "Topology Admit Handler" podUID="fa99a5621d016aa714804afcaa1e0a53" podNamespace="kube-system" podName="etcd-multinode-442000"
	I0314 19:42:18.233091    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.634832    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b2434280023596d1e3c90125a7219ed-kubeconfig\") pod \"kube-scheduler-multinode-442000\" (UID: \"2b2434280023596d1e3c90125a7219ed\") " pod="kube-system/kube-scheduler-multinode-442000"
	I0314 19:42:18.233091    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640587    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b179d157b6b2f71cc980c7ea5060a613be77e84e89947fbcb91a687ea7310eaf"
	I0314 19:42:18.233203    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640610    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046b896affe9f3219822b857a6b4dfa1427854d5df420b6b2e1cec631372548"
	I0314 19:42:18.233203    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640625    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773"
	I0314 19:42:18.233203    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640637    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b3244b47278e22e56ab0362b7a74ee80ca2806fb1074d718b0278b5bc70be76"
	I0314 19:42:18.233203    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640648    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0"
	I0314 19:42:18.233203    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640663    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="102c907609a3ac28e95d46e2671477684c5a043672e21597c677ee9dbfcb7e08"
	I0314 19:42:18.233312    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640674    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab390fc53b998ec55449f16c05933add797f430f2cc6f4b55afabf79cd8b0bc7"
	I0314 19:42:18.233312    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.713400    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:18.233312    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.714712    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:18.233405    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736377    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-ca-certs\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:18.233476    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736439    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-k8s-certs\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:18.233476    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736466    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:18.233548    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736490    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-flexvolume-dir\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:18.233548    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736521    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-k8s-certs\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:18.233619    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736546    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/fa99a5621d016aa714804afcaa1e0a53-etcd-certs\") pod \"etcd-multinode-442000\" (UID: \"fa99a5621d016aa714804afcaa1e0a53\") " pod="kube-system/etcd-multinode-442000"
	I0314 19:42:18.233690    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736609    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-ca-certs\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:18.233690    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736642    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-kubeconfig\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:18.233762    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736675    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:18.233762    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736706    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/fa99a5621d016aa714804afcaa1e0a53-etcd-data\") pod \"etcd-multinode-442000\" (UID: \"fa99a5621d016aa714804afcaa1e0a53\") " pod="kube-system/etcd-multinode-442000"
	I0314 19:42:18.233837    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.996146    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="800ms"
	I0314 19:42:18.233911    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.009288    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-442000.17bcb8e6e82683f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-442000", UID:"multinode-442000", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-442000"}, FirstTimestamp:time.Date(2024, ti
me.March, 14, 19, 41, 0, 370772979, time.Local), LastTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-442000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.93.236:8443: connect: connection refused'(may retry after sleeping)
	I0314 19:42:18.233983    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.128790    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:18.233983    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.130034    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:18.233983    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.475229    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234054    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.475367    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234054    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.647700    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234054    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.647839    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234125    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.684558    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c70744e60ac50b50085376d0c124ff15cc884b8a836b0085ef71a65ddb06bcfd"
	I0314 19:42:18.234125    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.767121    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234197    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.767283    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234197    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.797772    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="1.6s"
	I0314 19:42:18.234269    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.907277    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234341    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.907408    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234341    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.963548    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:18.234341    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.967786    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:18.234413    8428 command_runner.go:130] > Mar 14 19:41:03 multinode-442000 kubelet[1523]: I0314 19:41:03.581966    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:18.234413    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.875219    1523 kubelet_node_status.go:108] "Node was previously registered" node="multinode-442000"
	I0314 19:42:18.234413    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.875953    1523 kubelet_node_status.go:73] "Successfully registered node" node="multinode-442000"
	I0314 19:42:18.234413    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.881726    1523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0314 19:42:18.234486    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.882677    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0314 19:42:18.234486    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.894905    1523 setters.go:552] "Node became not ready" node="multinode-442000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-14T19:41:05Z","lastTransitionTime":"2024-03-14T19:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0314 19:42:18.234558    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: E0314 19:41:05.973748    1523 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-multinode-442000\" already exists" pod="kube-system/etcd-multinode-442000"
	I0314 19:42:18.234558    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.346543    1523 apiserver.go:52] "Watching apiserver"
	I0314 19:42:18.234558    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355573    1523 topology_manager.go:215] "Topology Admit Handler" podUID="677b9084-0026-4b21-b041-445940624ed7" podNamespace="kube-system" podName="kindnet-7b9lf"
	I0314 19:42:18.234558    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355823    1523 topology_manager.go:215] "Topology Admit Handler" podUID="c7f798bf-6722-4731-af8d-ccd5703d116e" podNamespace="kube-system" podName="kube-proxy-cg28g"
	I0314 19:42:18.234629    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355970    1523 topology_manager.go:215] "Topology Admit Handler" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac" podNamespace="kube-system" podName="coredns-5dd5756b68-d22jc"
	I0314 19:42:18.234701    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.356220    1523 topology_manager.go:215] "Topology Admit Handler" podUID="65d76566-4401-4b28-8452-10ed98624901" podNamespace="kube-system" podName="storage-provisioner"
	I0314 19:42:18.234701    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.356515    1523 topology_manager.go:215] "Topology Admit Handler" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2" podNamespace="default" podName="busybox-5b5d89c9d6-7446n"
	I0314 19:42:18.234701    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.356776    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.234772    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.356948    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.234772    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.360847    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-442000" podUID="02a2d011-5f4c-451c-9698-a88e42e4b6c9"
	I0314 19:42:18.234844    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.388530    1523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0314 19:42:18.234844    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.394882    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:18.234844    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419699    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7f798bf-6722-4731-af8d-ccd5703d116e-xtables-lock\") pod \"kube-proxy-cg28g\" (UID: \"c7f798bf-6722-4731-af8d-ccd5703d116e\") " pod="kube-system/kube-proxy-cg28g"
	I0314 19:42:18.234917    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419828    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-cni-cfg\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:18.234917    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419854    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-lib-modules\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:18.234989    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419895    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/65d76566-4401-4b28-8452-10ed98624901-tmp\") pod \"storage-provisioner\" (UID: \"65d76566-4401-4b28-8452-10ed98624901\") " pod="kube-system/storage-provisioner"
	I0314 19:42:18.235062    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419943    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-xtables-lock\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:18.235062    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.420062    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7f798bf-6722-4731-af8d-ccd5703d116e-lib-modules\") pod \"kube-proxy-cg28g\" (UID: \"c7f798bf-6722-4731-af8d-ccd5703d116e\") " pod="kube-system/kube-proxy-cg28g"
	I0314 19:42:18.235062    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.420370    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.235137    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.420509    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:06.920467401 +0000 UTC m=+6.742091622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.235208    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447169    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235208    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447481    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235283    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447769    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:06.9477485 +0000 UTC m=+6.769372721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235283    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.496544    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81fdcd9740169a0b72b7c7316eeac39f" path="/var/lib/kubelet/pods/81fdcd9740169a0b72b7c7316eeac39f/volumes"
	I0314 19:42:18.235283    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.497856    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="92e70beb375f9f247f5f8395dc065033" path="/var/lib/kubelet/pods/92e70beb375f9f247f5f8395dc065033/volumes"
	I0314 19:42:18.235354    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.840791    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-442000" podUID="8974ad44-5d36-48f0-bc6b-9115bab5fb5e"
	I0314 19:42:18.235427    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.864488    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-442000" podStartSLOduration=0.864428449 podCreationTimestamp="2024-03-14 19:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:41:06.656175631 +0000 UTC m=+6.477799952" watchObservedRunningTime="2024-03-14 19:41:06.864428449 +0000 UTC m=+6.686052670"
	I0314 19:42:18.235427    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.889820    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-442000"
	I0314 19:42:18.235427    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.925613    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.235499    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.925789    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:07.925744766 +0000 UTC m=+7.747368987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.235499    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026456    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235570    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026485    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235628    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026583    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:08.02656612 +0000 UTC m=+7.848190341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.479340    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.479540    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.934416    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.934566    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:09.934544359 +0000 UTC m=+9.756168580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035285    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035328    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035382    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:10.035364414 +0000 UTC m=+9.856988635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: I0314 19:41:08.192454    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-442000" podUID="8974ad44-5d36-48f0-bc6b-9115bab5fb5e"
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: I0314 19:41:08.232807    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-442000" podStartSLOduration=2.232765597 podCreationTimestamp="2024-03-14 19:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:41:08.211688076 +0000 UTC m=+8.033312297" watchObservedRunningTime="2024-03-14 19:41:08.232765597 +0000 UTC m=+8.054389818"
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.480285    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.480350    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.954598    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.954683    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:13.95466674 +0000 UTC m=+13.776290961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055917    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055948    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055999    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:14.055983733 +0000 UTC m=+13.877608054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:11 multinode-442000 kubelet[1523]: E0314 19:41:11.480167    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236215    8428 command_runner.go:130] > Mar 14 19:41:11 multinode-442000 kubelet[1523]: E0314 19:41:11.480285    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.480095    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.480797    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.988392    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.988528    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:21.98850961 +0000 UTC m=+21.810133831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089208    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089365    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089427    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:22.089409571 +0000 UTC m=+21.911033792 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:15 multinode-442000 kubelet[1523]: E0314 19:41:15.480116    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:15 multinode-442000 kubelet[1523]: E0314 19:41:15.480286    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:17 multinode-442000 kubelet[1523]: E0314 19:41:17.479583    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:17 multinode-442000 kubelet[1523]: E0314 19:41:17.480025    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:19 multinode-442000 kubelet[1523]: E0314 19:41:19.480562    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:19 multinode-442000 kubelet[1523]: E0314 19:41:19.480625    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:21 multinode-442000 kubelet[1523]: E0314 19:41:21.479895    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:21 multinode-442000 kubelet[1523]: E0314 19:41:21.480437    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236811    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.061436    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.236811    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.061515    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:38.061499618 +0000 UTC m=+37.883123839 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162555    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162603    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162667    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:38.162650651 +0000 UTC m=+37.984274872 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:23 multinode-442000 kubelet[1523]: E0314 19:41:23.480157    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:23 multinode-442000 kubelet[1523]: E0314 19:41:23.481151    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:25 multinode-442000 kubelet[1523]: E0314 19:41:25.479970    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:25 multinode-442000 kubelet[1523]: E0314 19:41:25.480065    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:27 multinode-442000 kubelet[1523]: E0314 19:41:27.480032    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:27 multinode-442000 kubelet[1523]: E0314 19:41:27.480122    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:29 multinode-442000 kubelet[1523]: E0314 19:41:29.480034    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:29 multinode-442000 kubelet[1523]: E0314 19:41:29.480291    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:31 multinode-442000 kubelet[1523]: E0314 19:41:31.479554    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:31 multinode-442000 kubelet[1523]: E0314 19:41:31.479650    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:33 multinode-442000 kubelet[1523]: E0314 19:41:33.479299    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:33 multinode-442000 kubelet[1523]: E0314 19:41:33.479835    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:35 multinode-442000 kubelet[1523]: E0314 19:41:35.479778    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.237426    8428 command_runner.go:130] > Mar 14 19:41:35 multinode-442000 kubelet[1523]: E0314 19:41:35.480230    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 kubelet[1523]: E0314 19:41:37.480388    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 kubelet[1523]: E0314 19:41:37.480921    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.089907    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.090056    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:42:10.090036325 +0000 UTC m=+69.911660546 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191172    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191351    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191425    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:42:10.191406835 +0000 UTC m=+70.013031056 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: I0314 19:41:38.578418    1523 scope.go:117] "RemoveContainer" containerID="07c2872c48edaa090b20d66267963c0d69c5c9eb97824b199af2d7e611ac596a"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: I0314 19:41:38.578814    1523 scope.go:117] "RemoveContainer" containerID="2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.579025    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(65d76566-4401-4b28-8452-10ed98624901)\"" pod="kube-system/storage-provisioner" podUID="65d76566-4401-4b28-8452-10ed98624901"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:39 multinode-442000 kubelet[1523]: E0314 19:41:39.479691    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:39 multinode-442000 kubelet[1523]: E0314 19:41:39.479909    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: E0314 19:41:41.479574    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: E0314 19:41:41.480003    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: I0314 19:41:41.518811    1523 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 kubelet[1523]: I0314 19:41:53.480206    1523 scope.go:117] "RemoveContainer" containerID="2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: I0314 19:42:00.447192    1523 scope.go:117] "RemoveContainer" containerID="9585e3eb2ead2f471eb0d22c8e29e4bfd954095774af365d80329ea39fff78e1"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: I0314 19:42:00.490865    1523 scope.go:117] "RemoveContainer" containerID="cd640f130e429bd4182c258358ec791604b8f307f9c45f2e3880e9b1a7df666a"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: E0314 19:42:00.516969    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.167906    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.214897    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439"
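The kubelet block above captures two overlapping symptoms of the node restart: pod sandboxes stay gated on "cni config uninitialized" until the CNI config is rewritten, and every MountVolume.SetUp for the coredns ConfigMap and kube-root-ca.crt is retried with a doubling delay (500ms, 1s, 2s, 4s, 8s, 16s, 32s) until those objects re-register in the kubelet's informer cache after the apiserver comes back. A minimal sketch of that capped doubling backoff — hypothetical names (retryWithBackoff, op), not the kubelet's actual nestedpendingoperations code:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retryWithBackoff doubles the delay after each failure, starting at base
    // and capping at maxDelay, mirroring the 500ms -> 1s -> ... -> 32s
    // progression visible in the kubelet log above.
    func retryWithBackoff(base, maxDelay time.Duration, attempts int, op func() error) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err := op(); err == nil {
    			return nil
    		}
    		time.Sleep(delay)
    		if delay *= 2; delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    	return errors.New("all retries failed")
    }

    func main() {
    	attempt := 0
    	_ = retryWithBackoff(500*time.Millisecond, 32*time.Second, 7, func() error {
    		attempt++
    		fmt.Println("MountVolume.SetUp attempt", attempt) // stand-in for the real operation
    		return errors.New(`object "kube-system"/"coredns" not registered`)
    	})
    }

Once the watch cache catches up (the 19:41:41 "Fast updating node status as it just became ready" line), the pending mount operations succeed and the sandbox errors stop.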
	I0314 19:42:18.278378    8428 logs.go:123] Gathering logs for kube-scheduler [32d90a3ea213] ...
	I0314 19:42:18.278378    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d90a3ea213"
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:03.376319       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:18.305382    8428 command_runner.go:130] ! W0314 19:41:05.770317       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0314 19:42:18.305382    8428 command_runner.go:130] ! W0314 19:41:05.770426       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:18.305382    8428 command_runner.go:130] ! W0314 19:41:05.770581       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0314 19:42:18.305382    8428 command_runner.go:130] ! W0314 19:41:05.770640       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.841573       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.841674       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.844125       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.845062       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.845143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.845293       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.946840       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
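The scheduler's first line, "Generated self-signed cert in-memory", is the standard Go pattern of minting a throwaway serving certificate at startup; the RBAC warnings that follow are transient and clear once the apiserver finishes starting. A rough equivalent of in-memory self-signing using only the standard library — an illustration, not the scheduler's actual code path:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"time"
    )

    func main() {
    	// Ephemeral key plus a certificate that signs itself: nothing is
    	// written to disk, so the cert lives only as long as the process.
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "localhost"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
    		IsCA:         true,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("self-signed cert, %d DER bytes\n", len(der))
    }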
	I0314 19:42:18.306387    8428 logs.go:123] Gathering logs for kube-proxy [2a62baf3f1b4] ...
	I0314 19:42:18.306387    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a62baf3f1b4"
	I0314 19:42:18.335928    8428 command_runner.go:130] ! I0314 19:19:18.247796       1 server_others.go:69] "Using iptables proxy"
	I0314 19:42:18.335988    8428 command_runner.go:130] ! I0314 19:19:18.275162       1 node.go:141] Successfully retrieved node IP: 172.17.86.124
	I0314 19:42:18.335988    8428 command_runner.go:130] ! I0314 19:19:18.379821       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:42:18.335988    8428 command_runner.go:130] ! I0314 19:19:18.379851       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:42:18.336060    8428 command_runner.go:130] ! I0314 19:19:18.395429       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:42:18.336060    8428 command_runner.go:130] ! I0314 19:19:18.395506       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:42:18.336060    8428 command_runner.go:130] ! I0314 19:19:18.395856       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:42:18.336115    8428 command_runner.go:130] ! I0314 19:19:18.395890       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.336115    8428 command_runner.go:130] ! I0314 19:19:18.417861       1 config.go:188] "Starting service config controller"
	I0314 19:42:18.336115    8428 command_runner.go:130] ! I0314 19:19:18.417913       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:42:18.336170    8428 command_runner.go:130] ! I0314 19:19:18.417950       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:42:18.336170    8428 command_runner.go:130] ! I0314 19:19:18.420511       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:42:18.336208    8428 command_runner.go:130] ! I0314 19:19:18.426566       1 config.go:315] "Starting node config controller"
	I0314 19:42:18.336208    8428 command_runner.go:130] ! I0314 19:19:18.426600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:42:18.336258    8428 command_runner.go:130] ! I0314 19:19:18.519508       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:42:18.336258    8428 command_runner.go:130] ! I0314 19:19:18.524347       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:42:18.336293    8428 command_runner.go:130] ! I0314 19:19:18.527360       1 shared_informer.go:318] Caches are synced for node config
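The kube-proxy lines follow client-go's shared-informer handshake: each config controller logs "Waiting for caches to sync" and proxying begins only after the matching "Caches are synced". A compact sketch of the same handshake against the Services informer, assuming k8s.io/client-go is available and a kubeconfig sits at the default path:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumes a reachable cluster via ~/.kube/config.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	stop := make(chan struct{})
    	defer close(stop)

    	factory := informers.NewSharedInformerFactory(client, 0)
    	svc := factory.Core().V1().Services().Informer()
    	factory.Start(stop)

    	// The same wait/synced handshake the kube-proxy log prints above.
    	if !cache.WaitForCacheSync(stop, svc.HasSynced) {
    		panic("caches did not sync")
    	}
    	fmt.Println("caches are synced for service config")
    }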
	I0314 19:42:18.337010    8428 logs.go:123] Gathering logs for container status ...
	I0314 19:42:18.337010    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:42:18.428614    8428 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0314 19:42:18.428614    8428 command_runner.go:130] > b159aedddf94a       ead0a4a53df89                                                                                         7 seconds ago        Running             coredns                   1                   89f326046d00d       coredns-5dd5756b68-d22jc
	I0314 19:42:18.428614    8428 command_runner.go:130] > 813492ad2d666       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   cddebe360bf3a       busybox-5b5d89c9d6-7446n
	I0314 19:42:18.428614    8428 command_runner.go:130] > 3167caea2534f       6e38f40d628db                                                                                         25 seconds ago       Running             storage-provisioner       2                   a723f141543f2       storage-provisioner
	I0314 19:42:18.428614    8428 command_runner.go:130] > 999e4c168afef       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   a9176b5544663       kindnet-7b9lf
	I0314 19:42:18.428614    8428 command_runner.go:130] > 497007582e446       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   f513a7aff6720       kube-proxy-cg28g
	I0314 19:42:18.428614    8428 command_runner.go:130] > 2876622a2618d       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   a723f141543f2       storage-provisioner
	I0314 19:42:18.429135    8428 command_runner.go:130] > 32d90a3ea2131       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   c70744e60ac50       kube-scheduler-multinode-442000
	I0314 19:42:18.429213    8428 command_runner.go:130] > a598d24960de8       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   a27fa2188ee4c       kube-apiserver-multinode-442000
	I0314 19:42:18.429292    8428 command_runner.go:130] > 12baf105f0bb2       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   67475bf80ddd9       kube-controller-manager-multinode-442000
	I0314 19:42:18.429401    8428 command_runner.go:130] > a81a9c43c3552       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   35dd339c8a08d       etcd-multinode-442000
	I0314 19:42:18.429476    8428 command_runner.go:130] > 0cd43cdaa31c9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   fa0f2372c88ee       busybox-5b5d89c9d6-7446n
	I0314 19:42:18.429550    8428 command_runner.go:130] > 8899bc0038935       ead0a4a53df89                                                                                         22 minutes ago       Exited              coredns                   0                   a3dba3fc54c01       coredns-5dd5756b68-d22jc
	I0314 19:42:18.429656    8428 command_runner.go:130] > 1a321c0e89971       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              22 minutes ago       Exited              kindnet-cni               0                   b046b896affe9       kindnet-7b9lf
	I0314 19:42:18.429683    8428 command_runner.go:130] > 2a62baf3f1b46       83f6cc407eed8                                                                                         23 minutes ago       Exited              kube-proxy                0                   9b3244b47278e       kube-proxy-cg28g
	I0314 19:42:18.429683    8428 command_runner.go:130] > dbb603289bf16       e3db313c6dbc0                                                                                         23 minutes ago       Exited              kube-scheduler            0                   54e39762d7a64       kube-scheduler-multinode-442000
	I0314 19:42:18.429683    8428 command_runner.go:130] > 16b80f73683dc       d058aa5ab969c                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   102c907609a3a       kube-controller-manager-multinode-442000
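The "container status" gather above shells out with a fallback: run crictl if `which crictl` finds it, otherwise fall back to `docker ps -a`. The same fallback expressed directly in Go — a sketch; minikube itself builds the shell one-liner shown at the ssh_runner line:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Prefer crictl when the binary exists and succeeds.
    	out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
    	if err != nil {
    		// crictl missing or failing: same listing via the docker CLI.
    		out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
    		if err != nil {
    			panic(err)
    		}
    	}
    	fmt.Print(string(out))
    }

The resulting table shows the restart cleanly: each control-plane component has a Running attempt 1 alongside the Exited attempt 0 from before the restart.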
	I0314 19:42:20.942057    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:42:20.950293    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 200:
	ok
	I0314 19:42:20.950754    8428 round_trippers.go:463] GET https://172.17.93.236:8443/version
	I0314 19:42:20.950754    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:20.950754    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:20.950754    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:20.952431    8428 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0314 19:42:20.952431    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:20.952431    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:20.952431    8428 round_trippers.go:580]     Content-Length: 264
	I0314 19:42:20.952431    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:21 GMT
	I0314 19:42:20.952431    8428 round_trippers.go:580]     Audit-Id: ddea6ce7-c94f-4e9e-8283-b11429c3c424
	I0314 19:42:20.952431    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:20.952431    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:20.952431    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:20.952431    8428 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0314 19:42:20.952924    8428 api_server.go:141] control plane version: v1.28.4
	I0314 19:42:20.952924    8428 api_server.go:131] duration metric: took 3.7413464s to wait for apiserver health ...
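
The health wait above boils down to two plain HTTPS GETs against the apiserver (/healthz, then /version). A minimal Go sketch of the same probe follows; the address is copied from the log, and TLS verification is skipped only because the test cluster uses a minikube-local CA. This is illustrative, not minikube's actual implementation.

    // healthprobe.go - sketch of the apiserver health/version check seen above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // The test apiserver presents a cluster-local cert, so verification is skipped here.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://172.17.93.236:8443" + path)
            if err != nil {
                fmt.Println(path, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %s\n%s\n", path, resp.Status, body)
        }
    }
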
	I0314 19:42:20.952924    8428 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:42:20.959195    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 19:42:20.984809    8428 command_runner.go:130] > a598d24960de
	I0314 19:42:20.984892    8428 logs.go:276] 1 containers: [a598d24960de]
	I0314 19:42:20.993697    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 19:42:21.017448    8428 command_runner.go:130] > a81a9c43c355
	I0314 19:42:21.018226    8428 logs.go:276] 1 containers: [a81a9c43c355]
	I0314 19:42:21.025637    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 19:42:21.050369    8428 command_runner.go:130] > b159aedddf94
	I0314 19:42:21.050430    8428 command_runner.go:130] > 8899bc003893
	I0314 19:42:21.050529    8428 logs.go:276] 2 containers: [b159aedddf94 8899bc003893]
	I0314 19:42:21.057547    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 19:42:21.080768    8428 command_runner.go:130] > 32d90a3ea213
	I0314 19:42:21.080768    8428 command_runner.go:130] > dbb603289bf1
	I0314 19:42:21.081742    8428 logs.go:276] 2 containers: [32d90a3ea213 dbb603289bf1]
	I0314 19:42:21.091487    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 19:42:21.144402    8428 command_runner.go:130] > 497007582e44
	I0314 19:42:21.144475    8428 command_runner.go:130] > 2a62baf3f1b4
	I0314 19:42:21.144523    8428 logs.go:276] 2 containers: [497007582e44 2a62baf3f1b4]
	I0314 19:42:21.154982    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 19:42:21.203077    8428 command_runner.go:130] > 12baf105f0bb
	I0314 19:42:21.203231    8428 command_runner.go:130] > 16b80f73683d
	I0314 19:42:21.203231    8428 logs.go:276] 2 containers: [12baf105f0bb 16b80f73683d]
	I0314 19:42:21.214969    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 19:42:21.242294    8428 command_runner.go:130] > 999e4c168afe
	I0314 19:42:21.242294    8428 command_runner.go:130] > 1a321c0e8997
	I0314 19:42:21.242294    8428 logs.go:276] 2 containers: [999e4c168afe 1a321c0e8997]
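
Each container lookup above is a docker ps invocation with a name filter, executed over SSH on the node. A rough local equivalent, assuming docker is on PATH (a sketch, not minikube's ssh_runner):

    // findcontainers.go - sketch of the per-component container lookup above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same filter pattern the log shows for each control-plane component.
        for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name="+name, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(name, "error:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
        }
    }
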
	I0314 19:42:21.242294    8428 logs.go:123] Gathering logs for kube-scheduler [32d90a3ea213] ...
	I0314 19:42:21.242294    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d90a3ea213"
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:03.376319       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:21.271269    8428 command_runner.go:130] ! W0314 19:41:05.770317       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0314 19:42:21.271269    8428 command_runner.go:130] ! W0314 19:41:05.770426       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:21.271269    8428 command_runner.go:130] ! W0314 19:41:05.770581       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0314 19:42:21.271269    8428 command_runner.go:130] ! W0314 19:41:05.770640       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.841573       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.841674       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.844125       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.845062       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.845143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.845293       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.946840       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:21.273943    8428 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:42:21.274013    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:42:21.471978    8428 command_runner.go:130] > Name:               multinode-442000
	I0314 19:42:21.471978    8428 command_runner.go:130] > Roles:              control-plane
	I0314 19:42:21.472046    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:21.472046    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:21.472046    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:21.472046    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000
	I0314 19:42:21.472046    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:21.472046    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:21.472107    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:21.472107    8428 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0314 19:42:21.472107    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_19_05_0700
	I0314 19:42:21.472107    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:21.472107    8428 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0314 19:42:21.472107    8428 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0314 19:42:21.472165    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:21.472165    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:21.472165    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:21.472165    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:19:00 +0000
	I0314 19:42:21.472222    8428 command_runner.go:130] > Taints:             <none>
	I0314 19:42:21.472288    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:21.472288    8428 command_runner.go:130] > Lease:
	I0314 19:42:21.472288    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000
	I0314 19:42:21.472288    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:21.472288    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:42:17 +0000
	I0314 19:42:21.472288    8428 command_runner.go:130] > Conditions:
	I0314 19:42:21.472334    8428 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0314 19:42:21.472334    8428 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0314 19:42:21.472334    8428 command_runner.go:130] >   MemoryPressure   False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0314 19:42:21.472334    8428 command_runner.go:130] >   DiskPressure     False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0314 19:42:21.472395    8428 command_runner.go:130] >   PIDPressure      False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0314 19:42:21.472395    8428 command_runner.go:130] >   Ready            True    Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:41:41 +0000   KubeletReady                 kubelet is posting ready status
	I0314 19:42:21.472395    8428 command_runner.go:130] > Addresses:
	I0314 19:42:21.472395    8428 command_runner.go:130] >   InternalIP:  172.17.93.236
	I0314 19:42:21.472395    8428 command_runner.go:130] >   Hostname:    multinode-442000
	I0314 19:42:21.472395    8428 command_runner.go:130] > Capacity:
	I0314 19:42:21.472475    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:21.472511    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:21.472545    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:21.472545    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:21.472545    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:21.472545    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:21.472545    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:21.472545    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:21.472545    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:21.472606    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:21.472606    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:21.472606    8428 command_runner.go:130] > System Info:
	I0314 19:42:21.472635    8428 command_runner.go:130] >   Machine ID:                 37c811f81f1d4d709fd4a6eb79d70749
	I0314 19:42:21.472635    8428 command_runner.go:130] >   System UUID:                8469b663-ea90-da4f-856d-11034a8f65d8
	I0314 19:42:21.472635    8428 command_runner.go:130] >   Boot ID:                    91589624-f8f3-469e-b556-aa6dd64e54de
	I0314 19:42:21.472635    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:21.472687    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:21.472703    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:21.472703    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:21.472703    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:21.472703    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:21.472703    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:21.472703    8428 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0314 19:42:21.472703    8428 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0314 19:42:21.472703    8428 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0314 19:42:21.472703    8428 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:21.472786    8428 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0314 19:42:21.472786    8428 command_runner.go:130] >   default                     busybox-5b5d89c9d6-7446n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0314 19:42:21.472786    8428 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-d22jc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0314 19:42:21.472786    8428 command_runner.go:130] >   kube-system                 etcd-multinode-442000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	I0314 19:42:21.472847    8428 command_runner.go:130] >   kube-system                 kindnet-7b9lf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0314 19:42:21.472847    8428 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-442000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	I0314 19:42:21.472877    8428 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-442000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:21.472919    8428 command_runner.go:130] >   kube-system                 kube-proxy-cg28g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:21.472919    8428 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-442000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:21.472919    8428 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0314 19:42:21.472952    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:21.472970    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:21.472970    8428 command_runner.go:130] >   Resource           Requests     Limits
	I0314 19:42:21.472970    8428 command_runner.go:130] >   --------           --------     ------
	I0314 19:42:21.472970    8428 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0314 19:42:21.472970    8428 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0314 19:42:21.472970    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0314 19:42:21.472970    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0314 19:42:21.472970    8428 command_runner.go:130] > Events:
	I0314 19:42:21.473033    8428 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0314 19:42:21.473033    8428 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0314 19:42:21.473057    8428 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0314 19:42:21.473057    8428 command_runner.go:130] >   Normal  Starting                 72s                kube-proxy       
	I0314 19:42:21.473057    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:21.473057    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:21.473057    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:21.473118    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:21.473118    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:21.473118    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:21.473175    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:21.473175    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:21.473175    8428 command_runner.go:130] >   Normal  Starting                 23m                kubelet          Starting kubelet.
	I0314 19:42:21.473175    8428 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	I0314 19:42:21.473175    8428 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-442000 status is now: NodeReady
	I0314 19:42:21.473251    8428 command_runner.go:130] >   Normal  Starting                 81s                kubelet          Starting kubelet.
	I0314 19:42:21.473251    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  81s (x8 over 81s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:21.473281    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    81s (x8 over 81s)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:21.473281    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     81s (x7 over 81s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:21.473281    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:21.473318    8428 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	I0314 19:42:21.473318    8428 command_runner.go:130] > Name:               multinode-442000-m02
	I0314 19:42:21.473357    8428 command_runner.go:130] > Roles:              <none>
	I0314 19:42:21.473373    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:21.473373    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:21.473373    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:21.473373    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000-m02
	I0314 19:42:21.473373    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:21.473373    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:21.473434    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:21.473465    8428 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0314 19:42:21.473465    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_22_02_0700
	I0314 19:42:21.473465    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:21.473500    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:21.473500    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:21.473500    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:21.473500    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:22:02 +0000
	I0314 19:42:21.473559    8428 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0314 19:42:21.473559    8428 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0314 19:42:21.473599    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:21.473599    8428 command_runner.go:130] > Lease:
	I0314 19:42:21.473599    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000-m02
	I0314 19:42:21.473634    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:21.473634    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:38:03 +0000
	I0314 19:42:21.473634    8428 command_runner.go:130] > Conditions:
	I0314 19:42:21.473684    8428 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0314 19:42:21.473684    8428 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0314 19:42:21.473684    8428 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.473733    8428 command_runner.go:130] >   DiskPressure     Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.473766    8428 command_runner.go:130] >   PIDPressure      Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.473766    8428 command_runner.go:130] >   Ready            Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.473766    8428 command_runner.go:130] > Addresses:
	I0314 19:42:21.473810    8428 command_runner.go:130] >   InternalIP:  172.17.80.135
	I0314 19:42:21.473810    8428 command_runner.go:130] >   Hostname:    multinode-442000-m02
	I0314 19:42:21.473846    8428 command_runner.go:130] > Capacity:
	I0314 19:42:21.473846    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:21.473846    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:21.473902    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:21.473902    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:21.473902    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:21.473902    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:21.473902    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:21.473902    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:21.473953    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:21.473953    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:21.473953    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:21.473953    8428 command_runner.go:130] > System Info:
	I0314 19:42:21.473953    8428 command_runner.go:130] >   Machine ID:                 35b6f7da4d3943d99d8a5913cae1c8fb
	I0314 19:42:21.474005    8428 command_runner.go:130] >   System UUID:                0b9b8376-0767-f940-9973-d373e3dc050d
	I0314 19:42:21.474005    8428 command_runner.go:130] >   Boot ID:                    45d479cc-26e8-46a6-9431-50637071f586
	I0314 19:42:21.474005    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:21.474005    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:21.474005    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:21.474005    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:21.474005    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:21.474081    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:21.474112    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:21.474112    8428 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0314 19:42:21.474146    8428 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0314 19:42:21.474146    8428 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0314 19:42:21.474146    8428 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:21.474194    8428 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0314 19:42:21.474194    8428 command_runner.go:130] >   default                     busybox-5b5d89c9d6-8drpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0314 19:42:21.474236    8428 command_runner.go:130] >   kube-system                 kindnet-c7m4p               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0314 19:42:21.474236    8428 command_runner.go:130] >   kube-system                 kube-proxy-72dzs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0314 19:42:21.474236    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:21.474236    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:21.474236    8428 command_runner.go:130] >   Resource           Requests   Limits
	I0314 19:42:21.474236    8428 command_runner.go:130] >   --------           --------   ------
	I0314 19:42:21.474236    8428 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0314 19:42:21.474236    8428 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0314 19:42:21.474318    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0314 19:42:21.474318    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0314 19:42:21.474351    8428 command_runner.go:130] > Events:
	I0314 19:42:21.474351    8428 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0314 19:42:21.474351    8428 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0314 19:42:21.474385    8428 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0314 19:42:21.474385    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientMemory
	I0314 19:42:21.474385    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasNoDiskPressure
	I0314 19:42:21.474445    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientPID
	I0314 19:42:21.474445    8428 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	I0314 19:42:21.474477    8428 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-442000-m02 status is now: NodeReady
	I0314 19:42:21.474518    8428 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	I0314 19:42:21.474518    8428 command_runner.go:130] >   Normal  NodeNotReady             22s                node-controller  Node multinode-442000-m02 status is now: NodeNotReady
	I0314 19:42:21.474518    8428 command_runner.go:130] > Name:               multinode-442000-m03
	I0314 19:42:21.474518    8428 command_runner.go:130] > Roles:              <none>
	I0314 19:42:21.474577    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:21.474577    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:21.474606    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:21.474606    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000-m03
	I0314 19:42:21.474652    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:21.474668    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:21.474668    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:21.474668    8428 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0314 19:42:21.474668    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_36_47_0700
	I0314 19:42:21.474668    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:21.474732    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:21.474732    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:21.474754    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:21.474754    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:36:47 +0000
	I0314 19:42:21.474754    8428 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0314 19:42:21.474754    8428 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0314 19:42:21.474815    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:21.474815    8428 command_runner.go:130] > Lease:
	I0314 19:42:21.474815    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000-m03
	I0314 19:42:21.474845    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:21.474845    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:37:37 +0000
	I0314 19:42:21.474845    8428 command_runner.go:130] > Conditions:
	I0314 19:42:21.474877    8428 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0314 19:42:21.474877    8428 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0314 19:42:21.474877    8428 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.474937    8428 command_runner.go:130] >   DiskPressure     Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.474937    8428 command_runner.go:130] >   PIDPressure      Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.474972    8428 command_runner.go:130] >   Ready            Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.474972    8428 command_runner.go:130] > Addresses:
	I0314 19:42:21.474972    8428 command_runner.go:130] >   InternalIP:  172.17.84.215
	I0314 19:42:21.475006    8428 command_runner.go:130] >   Hostname:    multinode-442000-m03
	I0314 19:42:21.475038    8428 command_runner.go:130] > Capacity:
	I0314 19:42:21.475038    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:21.475055    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:21.475055    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:21.475055    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:21.475055    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:21.475055    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:21.475055    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:21.475055    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:21.475055    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:21.475055    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:21.475055    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:21.475055    8428 command_runner.go:130] > System Info:
	I0314 19:42:21.475055    8428 command_runner.go:130] >   Machine ID:                 dc7772516bfe448db22a5c28796f53ab
	I0314 19:42:21.475157    8428 command_runner.go:130] >   System UUID:                71573585-d564-f043-9154-3d5854ce61b8
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Boot ID:                    fed746b2-110b-43ee-9065-09983ba74a37
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:21.475157    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:21.475157    8428 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0314 19:42:21.475157    8428 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0314 19:42:21.475157    8428 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:21.475157    8428 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0314 19:42:21.475157    8428 command_runner.go:130] >   kube-system                 kindnet-r7zdb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	I0314 19:42:21.475157    8428 command_runner.go:130] >   kube-system                 kube-proxy-w2qls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	I0314 19:42:21.475157    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:21.475157    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Resource           Requests   Limits
	I0314 19:42:21.475157    8428 command_runner.go:130] >   --------           --------   ------
	I0314 19:42:21.475157    8428 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0314 19:42:21.475157    8428 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0314 19:42:21.475157    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0314 19:42:21.475157    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0314 19:42:21.475157    8428 command_runner.go:130] > Events:
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0314 19:42:21.475157    8428 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  Starting                 5m32s                  kube-proxy       
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-442000-m03 status is now: NodeReady
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m34s (x5 over 5m36s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m34s (x5 over 5m36s)  kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m34s (x5 over 5m36s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  RegisteredNode           5m30s                  node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeReady                5m27s                  kubelet          Node multinode-442000-m03 status is now: NodeReady
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeNotReady             4m                     node-controller  Node multinode-442000-m03 status is now: NodeNotReady
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  RegisteredNode           63s                    node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
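
Note that both multinode-442000-m02 and multinode-442000-m03 report every condition as Unknown and carry node.kubernetes.io/unreachable taints: the node controller stopped receiving kubelet heartbeats, which matches the NodeNotReady events above. A quick way to tabulate per-node Ready status (illustrative; assumes kubectl and a working kubeconfig):

    // nodeready.go - sketch: list each node's Ready condition, as in the output above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // jsonpath pulls the Ready condition status for every node.
        out, err := exec.Command("kubectl", "get", "nodes", "-o",
            `jsonpath={range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).CombinedOutput()
        if err != nil {
            fmt.Println("kubectl error:", err)
        }
        fmt.Print(string(out))
    }
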
	I0314 19:42:21.484880    8428 logs.go:123] Gathering logs for etcd [a81a9c43c355] ...
	I0314 19:42:21.484880    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81a9c43c355"
	I0314 19:42:21.519214    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:01.944953Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0314 19:42:21.519385    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.945607Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.93.236:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.93.236:2380","--initial-cluster=multinode-442000=https://172.17.93.236:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.93.236:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.93.236:2380","--name=multinode-442000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0314 19:42:21.519442    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.945676Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0314 19:42:21.519442    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:01.945701Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0314 19:42:21.519562    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94571Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.93.236:2380"]}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94582Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94751Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"]}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.948798Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-442000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.989049Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"39.493838ms"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.0258Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.055698Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","commit-index":1967}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.067927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=()"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.067975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became follower at term 2"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.068051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fa26a6ed08186c39 [peers: [], term: 2, commit: 1967, applied: 0, lastindex: 1967, lastterm: 2]"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:02.100633Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.113992Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1090}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.125551Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1704}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.137052Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.152836Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"fa26a6ed08186c39","timeout":"7s"}
	I0314 19:42:21.520181    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.153448Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"fa26a6ed08186c39"}
	I0314 19:42:21.520244    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.153504Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"fa26a6ed08186c39","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0314 19:42:21.520300    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154089Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0314 19:42:21.520300    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154894Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0314 19:42:21.520370    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154977Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0314 19:42:21.520423    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154992Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0314 19:42:21.520423    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=(18025278095570267193)"}
	I0314 19:42:21.520482    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158756Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","added-peer-id":"fa26a6ed08186c39","added-peer-peer-urls":["https://172.17.86.124:2380"]}
	I0314 19:42:21.520535    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","cluster-version":"3.5"}
	I0314 19:42:21.520535    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158969Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0314 19:42:21.520603    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.159838Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0314 19:42:21.520714    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.160148Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"fa26a6ed08186c39","initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0314 19:42:21.520714    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.160272Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0314 19:42:21.520769    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.161335Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.93.236:2380"}
	I0314 19:42:21.520769    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.161389Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.93.236:2380"}
	I0314 19:42:21.520769    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 is starting a new election at term 2"}
	I0314 19:42:21.520876    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became pre-candidate at term 2"}
	I0314 19:42:21.520919    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgPreVoteResp from fa26a6ed08186c39 at term 2"}
	I0314 19:42:21.520974    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became candidate at term 3"}
	I0314 19:42:21.520974    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgVoteResp from fa26a6ed08186c39 at term 3"}
	I0314 19:42:21.520974    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became leader at term 3"}
	I0314 19:42:21.521043    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fa26a6ed08186c39 elected leader fa26a6ed08186c39 at term 3"}
	I0314 19:42:21.521096    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.292472Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fa26a6ed08186c39","local-member-attributes":"{Name:multinode-442000 ClientURLs:[https://172.17.93.236:2379]}","request-path":"/0/members/fa26a6ed08186c39/attributes","cluster-id":"76b99849a2fc5549","publish-timeout":"7s"}
	I0314 19:42:21.521155    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.292867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0314 19:42:21.521155    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.296522Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0314 19:42:21.521220    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.298446Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0314 19:42:21.521220    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.311867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.93.236:2379"}
	I0314 19:42:21.521292    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.311957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0314 19:42:21.521292    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.31205Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0314 19:42:21.528085    8428 logs.go:123] Gathering logs for coredns [b159aedddf94] ...
	I0314 19:42:21.528158    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b159aedddf94"
	I0314 19:42:21.558526    8428 command_runner.go:130] > .:53
	I0314 19:42:21.558526    8428 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0314 19:42:21.558526    8428 command_runner.go:130] > CoreDNS-1.10.1
	I0314 19:42:21.558605    8428 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0314 19:42:21.558605    8428 command_runner.go:130] > [INFO] 127.0.0.1:38965 - 37747 "HINFO IN 9162400456686827331.1281991328183180689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052220616s
	I0314 19:42:21.558839    8428 logs.go:123] Gathering logs for kube-proxy [2a62baf3f1b4] ...
	I0314 19:42:21.558839    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a62baf3f1b4"
	I0314 19:42:21.585131    8428 command_runner.go:130] ! I0314 19:19:18.247796       1 server_others.go:69] "Using iptables proxy"
	I0314 19:42:21.585752    8428 command_runner.go:130] ! I0314 19:19:18.275162       1 node.go:141] Successfully retrieved node IP: 172.17.86.124
	I0314 19:42:21.585800    8428 command_runner.go:130] ! I0314 19:19:18.379821       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:42:21.585800    8428 command_runner.go:130] ! I0314 19:19:18.379851       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:42:21.585800    8428 command_runner.go:130] ! I0314 19:19:18.395429       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:42:21.585800    8428 command_runner.go:130] ! I0314 19:19:18.395506       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:42:21.585851    8428 command_runner.go:130] ! I0314 19:19:18.395856       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:42:21.585851    8428 command_runner.go:130] ! I0314 19:19:18.395890       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.585851    8428 command_runner.go:130] ! I0314 19:19:18.417861       1 config.go:188] "Starting service config controller"
	I0314 19:42:21.585896    8428 command_runner.go:130] ! I0314 19:19:18.417913       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:42:21.585896    8428 command_runner.go:130] ! I0314 19:19:18.417950       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:42:21.585964    8428 command_runner.go:130] ! I0314 19:19:18.420511       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:42:21.585964    8428 command_runner.go:130] ! I0314 19:19:18.426566       1 config.go:315] "Starting node config controller"
	I0314 19:42:21.585964    8428 command_runner.go:130] ! I0314 19:19:18.426600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:42:21.585964    8428 command_runner.go:130] ! I0314 19:19:18.519508       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:42:21.586006    8428 command_runner.go:130] ! I0314 19:19:18.524347       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:42:21.586006    8428 command_runner.go:130] ! I0314 19:19:18.527360       1 shared_informer.go:318] Caches are synced for node config
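
This kube-proxy instance selected the iptables proxier in single-stack IPv4 mode and synced its config caches. Its liveness can be checked on the node via kube-proxy's default healthz port, 10256 (an illustrative sketch, run on the node itself):

    // proxyhealth.go - sketch: hit kube-proxy's default healthz port on the node.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        resp, err := http.Get("http://127.0.0.1:10256/healthz") // kube-proxy default
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }
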
	I0314 19:42:21.588004    8428 logs.go:123] Gathering logs for kube-controller-manager [12baf105f0bb] ...
	I0314 19:42:21.588067    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12baf105f0bb"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.101287       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.872151       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.874301       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.879645       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.880765       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.883873       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.883977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.787609       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.796442       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.796953       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.798900       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.848846       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.849015       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.849025       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.855296       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.858491       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.858512       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.864964       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.865080       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.865088       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.870629       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.871089       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.871332       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.889997       1 shared_informer.go:318] Caches are synced for tokens
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.899597       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.900355       1 horizontal.go:200] "Starting HPA controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.901325       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.921217       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.922072       1 disruption.go:433] "Sending events to api server."
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.922293       1 disruption.go:444] "Starting disruption controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.922481       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.927437       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.929290       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.929325       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.936410       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.936565       1 controller.go:169] "Starting ephemeral volume controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.936765       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.954720       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:42:21.619547    8428 command_runner.go:130] ! I0314 19:41:07.954939       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:42:21.619547    8428 command_runner.go:130] ! I0314 19:41:07.955142       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:42:21.619602    8428 command_runner.go:130] ! I0314 19:41:07.970387       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0314 19:42:21.619602    8428 command_runner.go:130] ! I0314 19:41:07.970474       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0314 19:42:21.619652    8428 command_runner.go:130] ! I0314 19:41:07.970624       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:21.619652    8428 command_runner.go:130] ! I0314 19:41:07.971307       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0314 19:42:21.619704    8428 command_runner.go:130] ! I0314 19:41:07.975049       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0314 19:42:21.619755    8428 command_runner.go:130] ! I0314 19:41:07.973288       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:21.619755    8428 command_runner.go:130] ! I0314 19:41:07.974848       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0314 19:42:21.619809    8428 command_runner.go:130] ! I0314 19:41:07.974977       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0314 19:42:21.619857    8428 command_runner.go:130] ! I0314 19:41:07.977476       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:42:21.619857    8428 command_runner.go:130] ! I0314 19:41:07.974992       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:21.619902    8428 command_runner.go:130] ! I0314 19:41:07.975020       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0314 19:42:21.619942    8428 command_runner.go:130] ! I0314 19:41:07.977827       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:21.619960    8428 command_runner.go:130] ! I0314 19:41:07.975030       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:21.620014    8428 command_runner.go:130] ! I0314 19:41:07.990774       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0314 19:42:21.620050    8428 command_runner.go:130] ! I0314 19:41:07.995647       1 ttl_controller.go:124] "Starting TTL controller"
	I0314 19:42:21.620071    8428 command_runner.go:130] ! I0314 19:41:07.995667       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0314 19:42:21.620109    8428 command_runner.go:130] ! I0314 19:41:08.019000       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0314 19:42:21.620157    8428 command_runner.go:130] ! I0314 19:41:08.019415       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.019568       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.019700       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:42:21.620225    8428 command_runner.go:130] ! E0314 19:41:08.029770       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.029950       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.030066       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.030148       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.056856       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.058933       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.059323       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.062839       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.063208       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.063512       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.070376       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.070635       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.070748       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.071006       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.071615       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.079849       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.080117       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.081765       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.084328       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.084731       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.085301       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.092529       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.092761       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.092771       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.097268       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.097521       1 expand_controller.go:328] "Starting expand controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.097531       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.097559       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.117374       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.117512       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.117524       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.126388       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.127645       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:42:21.620753    8428 command_runner.go:130] ! I0314 19:41:08.127702       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:42:21.620793    8428 command_runner.go:130] ! I0314 19:41:08.131336       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:42:21.620793    8428 command_runner.go:130] ! I0314 19:41:08.131505       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:42:21.620841    8428 command_runner.go:130] ! E0314 19:41:08.142589       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.142621       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.150057       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.152574       1 gc_controller.go:101] "Starting GC controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.152724       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.302881       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.303337       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! W0314 19:41:08.303671       1 shared_informer.go:593] resyncPeriod 21h24m41.293167603s is smaller than resyncCheckPeriod 22h48m56.659186017s and the informer has already started. Changing it to 22h48m56.659186017s
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.303970       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.304292       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.304532       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.304816       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.305073       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.305373       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.305634       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.305976       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.306286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.306541       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.306699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.306843       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.307119       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.307379       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.307553       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.307700       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.308022       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.308207       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.308473       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.308664       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.309850       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.310060       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.344084       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.344536       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.344832       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.397742       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.400742       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:42:21.621408    8428 command_runner.go:130] ! I0314 19:41:08.401126       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:42:21.621408    8428 command_runner.go:130] ! I0314 19:41:08.448054       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:42:21.621408    8428 command_runner.go:130] ! I0314 19:41:08.448538       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:42:21.621490    8428 command_runner.go:130] ! I0314 19:41:08.495738       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0314 19:42:21.621490    8428 command_runner.go:130] ! I0314 19:41:08.496045       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0314 19:42:21.621490    8428 command_runner.go:130] ! I0314 19:41:08.496112       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0314 19:42:21.621490    8428 command_runner.go:130] ! I0314 19:41:08.547967       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:42:21.621572    8428 command_runner.go:130] ! I0314 19:41:08.548352       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:42:21.621572    8428 command_runner.go:130] ! I0314 19:41:08.548556       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:42:21.621572    8428 command_runner.go:130] ! I0314 19:41:08.593742       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0314 19:42:21.621572    8428 command_runner.go:130] ! I0314 19:41:08.593860       1 job_controller.go:226] "Starting job controller"
	I0314 19:42:21.621655    8428 command_runner.go:130] ! I0314 19:41:08.594297       1 shared_informer.go:311] Waiting for caches to sync for job
	I0314 19:42:21.621655    8428 command_runner.go:130] ! I0314 19:41:08.650392       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:42:21.621736    8428 command_runner.go:130] ! I0314 19:41:08.650668       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:42:21.621736    8428 command_runner.go:130] ! I0314 19:41:08.650851       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:42:21.621736    8428 command_runner.go:130] ! I0314 19:41:08.704591       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0314 19:42:21.621736    8428 command_runner.go:130] ! I0314 19:41:08.704627       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0314 19:42:21.621816    8428 command_runner.go:130] ! I0314 19:41:08.704645       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0314 19:42:21.621816    8428 command_runner.go:130] ! I0314 19:41:18.768485       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0314 19:42:21.621816    8428 command_runner.go:130] ! I0314 19:41:18.768824       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0314 19:42:21.621816    8428 command_runner.go:130] ! I0314 19:41:18.769281       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0314 19:42:21.621816    8428 command_runner.go:130] ! I0314 19:41:18.769315       1 shared_informer.go:311] Waiting for caches to sync for node
	I0314 19:42:21.621898    8428 command_runner.go:130] ! I0314 19:41:18.779639       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:21.621898    8428 command_runner.go:130] ! I0314 19:41:18.796167       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 19:42:21.621898    8428 command_runner.go:130] ! I0314 19:41:18.796514       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:21.621898    8428 command_runner.go:130] ! I0314 19:41:18.796299       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000\" does not exist"
	I0314 19:42:21.621980    8428 command_runner.go:130] ! I0314 19:41:18.799471       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:42:21.621980    8428 command_runner.go:130] ! I0314 19:41:18.799722       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 19:42:21.621980    8428 command_runner.go:130] ! I0314 19:41:18.799937       1 shared_informer.go:318] Caches are synced for TTL
	I0314 19:42:21.621980    8428 command_runner.go:130] ! I0314 19:41:18.800165       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:21.622077    8428 command_runner.go:130] ! I0314 19:41:18.802329       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:21.622077    8428 command_runner.go:130] ! I0314 19:41:18.802379       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:21.622077    8428 command_runner.go:130] ! I0314 19:41:18.806338       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 19:42:21.622158    8428 command_runner.go:130] ! I0314 19:41:18.836188       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 19:42:21.622158    8428 command_runner.go:130] ! I0314 19:41:18.842003       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 19:42:21.622158    8428 command_runner.go:130] ! I0314 19:41:18.842516       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 19:42:21.622158    8428 command_runner.go:130] ! I0314 19:41:18.845380       1 shared_informer.go:318] Caches are synced for service account
	I0314 19:42:21.622158    8428 command_runner.go:130] ! I0314 19:41:18.848744       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 19:42:21.622239    8428 command_runner.go:130] ! I0314 19:41:18.849154       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 19:42:21.622239    8428 command_runner.go:130] ! I0314 19:41:18.849988       1 shared_informer.go:318] Caches are synced for namespace
	I0314 19:42:21.622239    8428 command_runner.go:130] ! I0314 19:41:18.850447       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:21.622239    8428 command_runner.go:130] ! I0314 19:41:18.851139       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 19:42:21.622319    8428 command_runner.go:130] ! I0314 19:41:18.852942       1 shared_informer.go:318] Caches are synced for GC
	I0314 19:42:21.622319    8428 command_runner.go:130] ! I0314 19:41:18.860631       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 19:42:21.622319    8428 command_runner.go:130] ! I0314 19:41:18.862001       1 shared_informer.go:318] Caches are synced for cronjob
	I0314 19:42:21.622319    8428 command_runner.go:130] ! I0314 19:41:18.862045       1 shared_informer.go:318] Caches are synced for PVC protection
	I0314 19:42:21.622400    8428 command_runner.go:130] ! I0314 19:41:18.864453       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0314 19:42:21.622400    8428 command_runner.go:130] ! I0314 19:41:18.865205       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 19:42:21.622400    8428 command_runner.go:130] ! I0314 19:41:18.870312       1 shared_informer.go:318] Caches are synced for node
	I0314 19:42:21.622400    8428 command_runner.go:130] ! I0314 19:41:18.871490       1 range_allocator.go:174] "Sending events to api server"
	I0314 19:42:21.622482    8428 command_runner.go:130] ! I0314 19:41:18.871652       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 19:42:21.622482    8428 command_runner.go:130] ! I0314 19:41:18.871843       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 19:42:21.622482    8428 command_runner.go:130] ! I0314 19:41:18.871901       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 19:42:21.622482    8428 command_runner.go:130] ! I0314 19:41:18.871655       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0314 19:42:21.622482    8428 command_runner.go:130] ! I0314 19:41:18.871600       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 19:42:21.622563    8428 command_runner.go:130] ! I0314 19:41:18.877449       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0314 19:42:21.622563    8428 command_runner.go:130] ! I0314 19:41:18.878919       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 19:42:21.622563    8428 command_runner.go:130] ! I0314 19:41:18.880521       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:21.622563    8428 command_runner.go:130] ! I0314 19:41:18.886337       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 19:42:21.622643    8428 command_runner.go:130] ! I0314 19:41:18.895206       1 shared_informer.go:318] Caches are synced for job
	I0314 19:42:21.622643    8428 command_runner.go:130] ! I0314 19:41:18.898522       1 shared_informer.go:318] Caches are synced for expand
	I0314 19:42:21.622643    8428 command_runner.go:130] ! I0314 19:41:18.902360       1 shared_informer.go:318] Caches are synced for deployment
	I0314 19:42:21.622643    8428 command_runner.go:130] ! I0314 19:41:18.905493       1 shared_informer.go:318] Caches are synced for HPA
	I0314 19:42:21.622643    8428 command_runner.go:130] ! I0314 19:41:18.906213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.805878ms"
	I0314 19:42:21.622722    8428 command_runner.go:130] ! I0314 19:41:18.908178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.802µs"
	I0314 19:42:21.622722    8428 command_runner.go:130] ! I0314 19:41:18.908549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.720551ms"
	I0314 19:42:21.622722    8428 command_runner.go:130] ! I0314 19:41:18.911784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.705µs"
	I0314 19:42:21.622803    8428 command_runner.go:130] ! I0314 19:41:18.919410       1 shared_informer.go:318] Caches are synced for crt configmap
	I0314 19:42:21.622803    8428 command_runner.go:130] ! I0314 19:41:18.923587       1 shared_informer.go:318] Caches are synced for disruption
	I0314 19:42:21.622803    8428 command_runner.go:130] ! I0314 19:41:18.974303       1 shared_informer.go:318] Caches are synced for taint
	I0314 19:42:21.622803    8428 command_runner.go:130] ! I0314 19:41:18.974653       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 19:42:21.622891    8428 command_runner.go:130] ! I0314 19:41:18.975178       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 19:42:21.622891    8428 command_runner.go:130] ! I0314 19:41:18.975416       1 taint_manager.go:210] "Sending events to api server"
	I0314 19:42:21.622891    8428 command_runner.go:130] ! I0314 19:41:18.977051       1 event.go:307] "Event occurred" object="multinode-442000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000 event: Registered Node multinode-442000 in Controller"
	I0314 19:42:21.622973    8428 command_runner.go:130] ! I0314 19:41:18.977995       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:42:21.622973    8428 command_runner.go:130] ! I0314 19:41:18.978165       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:21.623054    8428 command_runner.go:130] ! I0314 19:41:18.980168       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:21.623054    8428 command_runner.go:130] ! I0314 19:41:18.982162       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 19:42:21.623054    8428 command_runner.go:130] ! I0314 19:41:19.001384       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000"
	I0314 19:42:21.623054    8428 command_runner.go:130] ! I0314 19:41:19.002299       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:42:21.623136    8428 command_runner.go:130] ! I0314 19:41:19.002838       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:42:21.623136    8428 command_runner.go:130] ! I0314 19:41:19.003844       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0314 19:42:21.623136    8428 command_runner.go:130] ! I0314 19:41:19.010468       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:21.623136    8428 command_runner.go:130] ! I0314 19:41:19.393074       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:21.623219    8428 command_runner.go:130] ! I0314 19:41:19.393161       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 19:42:21.623219    8428 command_runner.go:130] ! I0314 19:41:19.450734       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:21.623219    8428 command_runner.go:130] ! I0314 19:41:41.542550       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:21.623300    8428 command_runner.go:130] ! I0314 19:41:44.029818       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0314 19:42:21.623300    8428 command_runner.go:130] ! I0314 19:41:44.029853       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-d22jc" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-d22jc"
	I0314 19:42:21.623300    8428 command_runner.go:130] ! I0314 19:41:44.029866       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-7446n" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-7446n"
	I0314 19:42:21.623383    8428 command_runner.go:130] ! I0314 19:41:59.058949       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m02 status is now: NodeNotReady"
	I0314 19:42:21.623383    8428 command_runner.go:130] ! I0314 19:41:59.074940       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8drpb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:21.623383    8428 command_runner.go:130] ! I0314 19:41:59.085508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.938337ms"
	I0314 19:42:21.623465    8428 command_runner.go:130] ! I0314 19:41:59.086845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.804µs"
	I0314 19:42:21.623465    8428 command_runner.go:130] ! I0314 19:41:59.099029       1 event.go:307] "Event occurred" object="kube-system/kindnet-c7m4p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:21.623545    8428 command_runner.go:130] ! I0314 19:41:59.122329       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-72dzs" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:21.623545    8428 command_runner.go:130] ! I0314 19:42:12.281109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.332951ms"
	I0314 19:42:21.623545    8428 command_runner.go:130] ! I0314 19:42:12.281325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="115.209µs"
	I0314 19:42:21.623545    8428 command_runner.go:130] ! I0314 19:42:12.305037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.006µs"
	I0314 19:42:21.623626    8428 command_runner.go:130] ! I0314 19:42:12.366507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.074928ms"
	I0314 19:42:21.623626    8428 command_runner.go:130] ! I0314 19:42:12.368560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.408µs"
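Read chronologically, this kube-controller-manager dump covers a full restart at 19:41: controllers register one by one, informer caches finish syncing around 19:41:19, the taint manager first cancels pending evictions (the TaintManagerEviction "Cancelling deletion" events at 19:41:44), and then marks multinode-442000-m02 NotReady at 19:41:59, flagging its busybox, kindnet, and kube-proxy pods. One way to correlate these events from the client side (a sketch, again assuming the kubectl context matches the profile name):

	kubectl --context multinode-442000 get events -A --sort-by=.lastTimestamp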
	I0314 19:42:21.637843    8428 logs.go:123] Gathering logs for kindnet [999e4c168afe] ...
	I0314 19:42:21.637843    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 999e4c168afe"
	I0314 19:42:21.664882    8428 command_runner.go:130] ! I0314 19:41:08.409720       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0314 19:42:21.664931    8428 command_runner.go:130] ! I0314 19:41:08.410195       1 main.go:107] hostIP = 172.17.93.236
	I0314 19:42:21.664931    8428 command_runner.go:130] ! podIP = 172.17.93.236
	I0314 19:42:21.664931    8428 command_runner.go:130] ! I0314 19:41:08.411178       1 main.go:116] setting mtu 1500 for CNI 
	I0314 19:42:21.664931    8428 command_runner.go:130] ! I0314 19:41:08.411230       1 main.go:146] kindnetd IP family: "ipv4"
	I0314 19:42:21.664931    8428 command_runner.go:130] ! I0314 19:41:08.411277       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0314 19:42:21.664931    8428 command_runner.go:130] ! I0314 19:41:38.747509       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0314 19:42:21.665066    8428 command_runner.go:130] ! I0314 19:41:38.770843       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:21.665066    8428 command_runner.go:130] ! I0314 19:41:38.770994       1 main.go:227] handling current node
	I0314 19:42:21.665066    8428 command_runner.go:130] ! I0314 19:41:38.771413       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:21.665129    8428 command_runner.go:130] ! I0314 19:41:38.771428       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:21.665129    8428 command_runner.go:130] ! I0314 19:41:38.771670       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.80.135 Flags: [] Table: 0} 
	I0314 19:42:21.665129    8428 command_runner.go:130] ! I0314 19:41:38.771817       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:21.665203    8428 command_runner.go:130] ! I0314 19:41:38.771827       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:21.665261    8428 command_runner.go:130] ! I0314 19:41:38.771944       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.84.215 Flags: [] Table: 0} 
	I0314 19:42:21.665308    8428 command_runner.go:130] ! I0314 19:41:48.777997       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:21.665308    8428 command_runner.go:130] ! I0314 19:41:48.778091       1 main.go:227] handling current node
	I0314 19:42:21.665308    8428 command_runner.go:130] ! I0314 19:41:48.778105       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:21.665308    8428 command_runner.go:130] ! I0314 19:41:48.778113       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:21.665376    8428 command_runner.go:130] ! I0314 19:41:48.778217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:21.665376    8428 command_runner.go:130] ! I0314 19:41:48.778373       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:21.665420    8428 command_runner.go:130] ! I0314 19:41:58.793215       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:21.665457    8428 command_runner.go:130] ! I0314 19:41:58.793285       1 main.go:227] handling current node
	I0314 19:42:21.665514    8428 command_runner.go:130] ! I0314 19:41:58.793297       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:21.665557    8428 command_runner.go:130] ! I0314 19:41:58.793304       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:21.665557    8428 command_runner.go:130] ! I0314 19:41:58.793793       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:21.665557    8428 command_runner.go:130] ! I0314 19:41:58.793859       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:21.665618    8428 command_runner.go:130] ! I0314 19:42:08.808709       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:21.665618    8428 command_runner.go:130] ! I0314 19:42:08.808803       1 main.go:227] handling current node
	I0314 19:42:21.665663    8428 command_runner.go:130] ! I0314 19:42:08.808818       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:21.665663    8428 command_runner.go:130] ! I0314 19:42:08.808826       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:21.665739    8428 command_runner.go:130] ! I0314 19:42:08.809153       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:21.665739    8428 command_runner.go:130] ! I0314 19:42:08.809168       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:21.665794    8428 command_runner.go:130] ! I0314 19:42:18.821697       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:21.665851    8428 command_runner.go:130] ! I0314 19:42:18.821789       1 main.go:227] handling current node
	I0314 19:42:21.665851    8428 command_runner.go:130] ! I0314 19:42:18.821805       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:21.665895    8428 command_runner.go:130] ! I0314 19:42:18.821814       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:21.665895    8428 command_runner.go:130] ! I0314 19:42:18.822290       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:21.665895    8428 command_runner.go:130] ! I0314 19:42:18.822324       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
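kindnet's first node list times out (dial tcp 10.96.0.1:443: i/o timeout) while the apiserver is still coming up, then succeeds and installs routes for the other nodes' pod CIDRs, after which it settles into its roughly 10-second reconcile loop (entries at :38, :48, :58, and so on). The routes it programs can be inspected on the node itself; a sketch using this run's profile name:

	minikube ssh -p multinode-442000 -- ip route show

which, per the log above, should include 10.244.1.0/24 via 172.17.80.135 and 10.244.3.0/24 via 172.17.84.215.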
	I0314 19:42:21.669432    8428 logs.go:123] Gathering logs for Docker ...
	I0314 19:42:21.669432    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 19:42:21.699492    8428 command_runner.go:130] > Mar 14 19:39:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:21.699492    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:21.699492    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:21.699583    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:21.699583    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:21.699583    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:21.699583    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.699583    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0314 19:42:21.699583    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.699679    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:21.699679    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:21.699679    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:21.699679    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:21.699679    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:21.699774    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:21.699774    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.699774    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0314 19:42:21.699774    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.699774    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:21.699858    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:21.699858    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:21.699858    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:21.699858    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:21.699858    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:21.699940    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.699940    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0314 19:42:21.699940    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.699940    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0314 19:42:21.699940    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:21.699940    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.700022    8428 command_runner.go:130] > Mar 14 19:40:26 multinode-442000 systemd[1]: Starting Docker Application Container Engine...
	I0314 19:42:21.700022    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.010258466Z" level=info msg="Starting up"
	I0314 19:42:21.700022    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.011413188Z" level=info msg="containerd not running, starting managed containerd"
	I0314 19:42:21.700104    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.012927209Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=656
	I0314 19:42:21.700104    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.042687292Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0314 19:42:21.700104    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069138554Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0314 19:42:21.700104    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069242083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0314 19:42:21.700184    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069344111Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0314 19:42:21.700184    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069362416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700184    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070081016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.700184    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070164740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700270    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070380400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.700270    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070511536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700270    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070532642Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0314 19:42:21.700351    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070544145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700383    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070983067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.071556427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074554061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074645687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074800830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074883153Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075687977Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075800308Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075818813Z" level=info msg="metadata content store policy set" policy=shared
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081334348Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081440978Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081463484Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081526902Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081545007Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081621128Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082036144Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082193387Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082276711Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082349431Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082368036Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082385141Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082401545Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082417450Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082433154Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082457161Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082515377Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082533482Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.701018    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082554788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082572093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082586997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082601801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701127    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082616305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701127    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082631109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701127    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082643913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701127    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082659317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701127    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082673721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701210    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082690226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701210    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082704230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701210    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082717333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701210    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082730637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701210    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082747942Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0314 19:42:21.701286    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082771048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701318    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082785952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701318    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082799956Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082936994Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082973004Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082986808Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082998612Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083067631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083095839Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083107842Z" level=info msg="NRI interface is disabled by configuration."
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083364013Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083531860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083575672Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083609482Z" level=info msg="containerd successfully booted in 0.043398s"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.063674621Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.220876850Z" level=info msg="Loading containers: start."
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.643208421Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.726589336Z" level=info msg="Loading containers: done."
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.750141296Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.750832983Z" level=info msg="Daemon has completed initialization"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 systemd[1]: Started Docker Application Container Engine.
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.799522730Z" level=info msg="API listen on [::]:2376"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.799691776Z" level=info msg="API listen on /var/run/docker.sock"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 systemd[1]: Stopping Docker Application Container Engine...
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.824796168Z" level=info msg="Processing signal 'terminated'"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.825961557Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826585605Z" level=info msg="Daemon shutdown complete"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826653911Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826812323Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: docker.service: Deactivated successfully.
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: Stopped Docker Application Container Engine.
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: Starting Docker Application Container Engine...
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.899936864Z" level=info msg="Starting up"
	I0314 19:42:21.701872    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.900739426Z" level=info msg="containerd not running, starting managed containerd"
	I0314 19:42:21.701872    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.901763504Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1049
	I0314 19:42:21.701872    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.930795337Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0314 19:42:21.701872    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.957961927Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0314 19:42:21.701872    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958063735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0314 19:42:21.701971    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958107338Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0314 19:42:21.701971    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958123339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.701971    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958150841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.701971    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958163842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.702049    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958360458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.702049    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958444864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.702125    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958463766Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0314 19:42:21.702125    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958475466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.702125    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958502569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.702125    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958670881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.702201    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961627209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.702201    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961715316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.702201    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961871928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.702284    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961949634Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0314 19:42:21.702317    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961985336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0314 19:42:21.702317    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962005238Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0314 19:42:21.702365    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962017139Z" level=info msg="metadata content store policy set" policy=shared
	I0314 19:42:21.702398    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962188852Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0314 19:42:21.702420    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962280259Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0314 19:42:21.702420    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962311462Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0314 19:42:21.702457    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962328263Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0314 19:42:21.702457    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962344564Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0314 19:42:21.702493    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962393368Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962810900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962939310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963018216Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963036317Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963060419Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963076820Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963091221Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963106323Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963121324Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963135425Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963148726Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963162027Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963184029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963205330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963220631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963270235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963286336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963300438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963313039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963326640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963341141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963357642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963369743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963382444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963395545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963411646Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963433148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963449149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963461550Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963512954Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0314 19:42:21.703048    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963529855Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0314 19:42:21.703048    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963593860Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0314 19:42:21.703048    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963606261Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0314 19:42:21.703126    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963665466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.703126    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963679767Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0314 19:42:21.703126    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963695368Z" level=info msg="NRI interface is disabled by configuration."
	I0314 19:42:21.703126    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.964176205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0314 19:42:21.703204    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.964503330Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0314 19:42:21.703204    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.965392899Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0314 19:42:21.703204    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.966787506Z" level=info msg="containerd successfully booted in 0.037267s"
	I0314 19:42:21.703280    8428 command_runner.go:130] > Mar 14 19:40:54 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:54.945087153Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0314 19:42:21.703280    8428 command_runner.go:130] > Mar 14 19:40:54 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:54.972020025Z" level=info msg="Loading containers: start."
	I0314 19:42:21.703280    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.259462934Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0314 19:42:21.703353    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.336883289Z" level=info msg="Loading containers: done."
	I0314 19:42:21.703353    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.370669888Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0314 19:42:21.703353    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.370874904Z" level=info msg="Daemon has completed initialization"
	I0314 19:42:21.703353    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.415311921Z" level=info msg="API listen on /var/run/docker.sock"
	I0314 19:42:21.703428    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.415467233Z" level=info msg="API listen on [::]:2376"
	I0314 19:42:21.703428    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 systemd[1]: Started Docker Application Container Engine.
	I0314 19:42:21.703428    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:21.703428    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:21.703501    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:21.703501    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0314 19:42:21.703501    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Loaded network plugin cni"
	I0314 19:42:21.703501    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0314 19:42:21.703689    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker Info: &{ID:04f4855f-417a-422c-b5bb-3cf8a43fb438 Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2024-03-14T19:40:56.401787998Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 Ke
rnelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0004c0150 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-442000 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[nam
e=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0314 19:42:21.703725    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0314 19:42:21.703725    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0314 19:42:21.703725    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0314 19:42:21.703802    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Start cri-dockerd grpc backend"
	I0314 19:42:21.703802    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.703839    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-7446n_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773\""
	I0314 19:42:21.703839    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-d22jc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0\""
	I0314 19:42:21.703905    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294795352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.703905    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294882858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.703905    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294903860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.703983    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.295303891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.380666857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.380946878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.381075288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.381588628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418754186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418872295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418919499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.419130315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35dd339c8a08d84d0d1a4d2c062b04d44baff78d20c6ed33ce967d50c18eaa3c/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.449937485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450067495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450100297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450295012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67475bf80ddd91df7549842450a8d92c27cd16f814cd4e4c750a7cad7d82fc9f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a27fa2188ee4cf0c44cde0f8cae03a83655bc574c856082192e3261801efcc72/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c70744e60ac50b50085376d0c124ff15cc884b8a836b0085ef71a65ddb06bcfd/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782527266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782834890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782945299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.783324628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950307171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950638097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950847113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.951959699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704572    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.033329657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704572    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.033826996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704572    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.034090516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704572    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.034801671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704652    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038389546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704652    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038570160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704652    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038686569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704727    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038972291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704727    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:05Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0314 19:42:21.704727    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056067890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704803    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056148096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704803    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056166397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704803    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056406816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704876    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.109761119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704876    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110023440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704876    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110099145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704950    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110475674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704950    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.116978275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704950    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117046280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705024    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117060481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705024    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117158888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705024    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a723f141543f2007cc07e048ef5836fca4ae70749b7266630f6c890bb233c09a/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.705099    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f513a7aff67200987eb0f28647720ea4cb9bbdb684fc85d1b08c0dd54563517d/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.705099    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432676357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705099    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432829669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705181    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432849370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705181    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.433004382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705181    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.579105320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705257    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580432922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705257    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580451623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705257    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580554931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705257    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9176b55446637c4407c9a64ce7d85fce2b395bcc0a22061f5f7ff304ff2d47f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.705336    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.897653021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705336    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.897936143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705336    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.898062553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705411    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.898459584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705411    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1043]: time="2024-03-14T19:41:37.705977514Z" level=info msg="ignoring event" container=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0314 19:42:21.705411    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706482647Z" level=info msg="shim disconnected" id=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 namespace=moby
	I0314 19:42:21.705487    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706677460Z" level=warning msg="cleaning up after shim disconnected" id=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 namespace=moby
	I0314 19:42:21.705487    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706692261Z" level=info msg="cleaning up dead shim" namespace=moby
	I0314 19:42:21.705487    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663136392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705563    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663371709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705563    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663411212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705563    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663537821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705563    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837487028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705639    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837604337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705674    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837625738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705704    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837719345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705745    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.848167835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849098605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849287919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849656747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.575693713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.575950032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.576019637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577004211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577168224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577288033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577583255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.576656985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:13 multinode-442000 dockerd[1043]: 2024/03/14 19:42:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	[... the same dockerd message repeats 28 more times between 19:42:14 and 19:42:21 ...]
	I0314 19:42:21.707031    8428 command_runner.go:130] > Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
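The repeated "superfluous response.WriteHeader" entries above all originate in dockerd's otelhttp instrumentation wrapper (wrap.go:98) and appear to be benign logging noise rather than failed requests. To read the Docker journal without them, a filter along these lines should work (a sketch, assuming SSH access to the node and that dockerd logs to the systemd docker unit, as it does in the minikube guest):

    minikube -p multinode-442000 ssh "sudo journalctl -u docker --no-pager | grep -v 'superfluous response.WriteHeader'"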
	I0314 19:42:21.738751    8428 logs.go:123] Gathering logs for container status ...
	I0314 19:42:21.738751    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:42:21.830839    8428 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0314 19:42:21.830839    8428 command_runner.go:130] > b159aedddf94a       ead0a4a53df89                                                                                         11 seconds ago       Running             coredns                   1                   89f326046d00d       coredns-5dd5756b68-d22jc
	I0314 19:42:21.830984    8428 command_runner.go:130] > 813492ad2d666       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   cddebe360bf3a       busybox-5b5d89c9d6-7446n
	I0314 19:42:21.830984    8428 command_runner.go:130] > 3167caea2534f       6e38f40d628db                                                                                         29 seconds ago       Running             storage-provisioner       2                   a723f141543f2       storage-provisioner
	I0314 19:42:21.830984    8428 command_runner.go:130] > 999e4c168afef       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   a9176b5544663       kindnet-7b9lf
	I0314 19:42:21.830984    8428 command_runner.go:130] > 497007582e446       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   f513a7aff6720       kube-proxy-cg28g
	I0314 19:42:21.830984    8428 command_runner.go:130] > 2876622a2618d       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   a723f141543f2       storage-provisioner
	I0314 19:42:21.830984    8428 command_runner.go:130] > 32d90a3ea2131       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   c70744e60ac50       kube-scheduler-multinode-442000
	I0314 19:42:21.831116    8428 command_runner.go:130] > a598d24960de8       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   a27fa2188ee4c       kube-apiserver-multinode-442000
	I0314 19:42:21.831116    8428 command_runner.go:130] > 12baf105f0bb2       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   67475bf80ddd9       kube-controller-manager-multinode-442000
	I0314 19:42:21.831173    8428 command_runner.go:130] > a81a9c43c3552       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   35dd339c8a08d       etcd-multinode-442000
	I0314 19:42:21.831207    8428 command_runner.go:130] > 0cd43cdaa31c9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   fa0f2372c88ee       busybox-5b5d89c9d6-7446n
	I0314 19:42:21.831207    8428 command_runner.go:130] > 8899bc0038935       ead0a4a53df89                                                                                         22 minutes ago       Exited              coredns                   0                   a3dba3fc54c01       coredns-5dd5756b68-d22jc
	I0314 19:42:21.831207    8428 command_runner.go:130] > 1a321c0e89971       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              22 minutes ago       Exited              kindnet-cni               0                   b046b896affe9       kindnet-7b9lf
	I0314 19:42:21.831207    8428 command_runner.go:130] > 2a62baf3f1b46       83f6cc407eed8                                                                                         23 minutes ago       Exited              kube-proxy                0                   9b3244b47278e       kube-proxy-cg28g
	I0314 19:42:21.831207    8428 command_runner.go:130] > dbb603289bf16       e3db313c6dbc0                                                                                         23 minutes ago       Exited              kube-scheduler            0                   54e39762d7a64       kube-scheduler-multinode-442000
	I0314 19:42:21.831207    8428 command_runner.go:130] > 16b80f73683dc       d058aa5ab969c                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   102c907609a3a       kube-controller-manager-multinode-442000
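The container listing above is the output of the fallback pipeline in the preceding Run: line (crictl when it is installed, plain docker otherwise). To reproduce it by hand against this profile, something like the following should work (a sketch, assuming the profile is still running):

    minikube -p multinode-442000 ssh "sudo crictl ps -a"

The ATTEMPT column is the useful signal here: coredns, busybox, storage-provisioner, kindnet-cni, kube-proxy, kube-scheduler and kube-controller-manager are all restarts (attempt 1 or 2) of earlier attempts that now show Exited, while kube-apiserver and etcd are fresh attempt-0 containers in new pods. That pattern is consistent with the node restart around 19:41 seen in the kubelet journal below, not with a crash loop.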
	I0314 19:42:21.834044    8428 logs.go:123] Gathering logs for kubelet ...
	I0314 19:42:21.834123    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516074    1388 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516440    1388 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516773    1388 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: E0314 19:40:57.516893    1388 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293295    1450 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293422    1450 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293759    1450 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: E0314 19:40:58.293809    1450 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:21.856870    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270178    1523 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:21.856939    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270275    1523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.856999    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270469    1523 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:21.856999    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.272943    1523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0314 19:42:21.857069    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.286808    1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:21.857069    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.333673    1523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0314 19:42:21.857136    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335204    1523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0314 19:42:21.857242    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335543    1523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0314 19:42:21.857289    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335688    1523 topology_manager.go:138] "Creating topology manager with none policy"
	I0314 19:42:21.857289    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335703    1523 container_manager_linux.go:301] "Creating device plugin manager"
	I0314 19:42:21.857289    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.336879    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0314 19:42:21.857350    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.338507    1523 kubelet.go:393] "Attempting to sync node with API server"
	I0314 19:42:21.857416    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.338606    1523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0314 19:42:21.857416    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.339942    1523 kubelet.go:309] "Adding apiserver pod source"
	I0314 19:42:21.857467    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.339973    1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0314 19:42:21.857542    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.342644    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.857542    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.342728    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.857621    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.352846    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.857682    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.353005    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.857749    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.362091    1523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0314 19:42:21.857749    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.368654    1523 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0314 19:42:21.857749    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.370831    1523 server.go:1232] "Started kubelet"
	I0314 19:42:21.857821    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.376404    1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0314 19:42:21.857821    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.381472    1523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0314 19:42:21.857891    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.381715    1523 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0314 19:42:21.857891    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.383735    1523 server.go:462] "Adding debug handlers to kubelet server"
	I0314 19:42:21.857956    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.385265    1523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0314 19:42:21.857956    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.387577    1523 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0314 19:42:21.857956    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.392182    1523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0314 19:42:21.857956    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.392853    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="200ms"
	I0314 19:42:21.857956    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.392921    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.857956    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.392970    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.858309    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.402867    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-442000.17bcb8e6e82683f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-442000", UID:"multinode-442000", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-442000"}, FirstTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), LastTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-442000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.93.236:8443: connect: connection refused'(may retry after sleeping)
	I0314 19:42:21.858309    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.431568    1523 reconciler_new.go:29] "Reconciler: start to sync state"
	I0314 19:42:21.858309    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453043    1523 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0314 19:42:21.858383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453062    1523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0314 19:42:21.858383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453088    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0314 19:42:21.858383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453812    1523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0314 19:42:21.858383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453838    1523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0314 19:42:21.858476    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453846    1523 policy_none.go:49] "None policy: Start"
	I0314 19:42:21.858476    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.459854    1523 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0314 19:42:21.858511    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.459925    1523 state_mem.go:35] "Initializing new in-memory state store"
	I0314 19:42:21.858535    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.460715    1523 state_mem.go:75] "Updated machine memory state"
	I0314 19:42:21.858586    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.466366    1523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0314 19:42:21.858586    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.471455    1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0314 19:42:21.858651    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.475344    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0314 19:42:21.858651    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478780    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0314 19:42:21.858651    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478820    1523 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0314 19:42:21.858651    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478846    1523 kubelet.go:2303] "Starting kubelet main sync loop"
	I0314 19:42:21.858731    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.478899    1523 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.485952    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.487569    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.493845    1523 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-442000\" not found"
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.501023    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.501915    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.503739    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0314 19:42:21.859016    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0314 19:42:21.859088    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0314 19:42:21.859088    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0314 19:42:21.859168    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.578961    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5b88117f99a24e81a324ab026c69a7058a7c1bc88d9b9a5386134abc257bba"
	I0314 19:42:21.859168    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.578983    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e39762d7a6437164a9b2c6dd22b1f36b57514310190ce4acc3349001cb1774"
	I0314 19:42:21.859168    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.579017    1523 topology_manager.go:215] "Topology Admit Handler" podUID="2b2434280023596d1e3c90125a7219ed" podNamespace="kube-system" podName="kube-scheduler-multinode-442000"
	I0314 19:42:21.859168    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.592991    1523 topology_manager.go:215] "Topology Admit Handler" podUID="7754d2f32966faec8123dc3b8a2af767" podNamespace="kube-system" podName="kube-apiserver-multinode-442000"
	I0314 19:42:21.859364    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.594193    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="400ms"
	I0314 19:42:21.859416    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.609977    1523 topology_manager.go:215] "Topology Admit Handler" podUID="a7ee530f2bd843eddeace8cd6ec0d204" podNamespace="kube-system" podName="kube-controller-manager-multinode-442000"
	I0314 19:42:21.859416    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.622973    1523 topology_manager.go:215] "Topology Admit Handler" podUID="fa99a5621d016aa714804afcaa1e0a53" podNamespace="kube-system" podName="etcd-multinode-442000"
	I0314 19:42:21.859486    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.634832    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b2434280023596d1e3c90125a7219ed-kubeconfig\") pod \"kube-scheduler-multinode-442000\" (UID: \"2b2434280023596d1e3c90125a7219ed\") " pod="kube-system/kube-scheduler-multinode-442000"
	I0314 19:42:21.859486    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640587    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b179d157b6b2f71cc980c7ea5060a613be77e84e89947fbcb91a687ea7310eaf"
	I0314 19:42:21.859561    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640610    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046b896affe9f3219822b857a6b4dfa1427854d5df420b6b2e1cec631372548"
	I0314 19:42:21.859561    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640625    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773"
	I0314 19:42:21.859627    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640637    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b3244b47278e22e56ab0362b7a74ee80ca2806fb1074d718b0278b5bc70be76"
	I0314 19:42:21.859627    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640648    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0"
	I0314 19:42:21.859627    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640663    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="102c907609a3ac28e95d46e2671477684c5a043672e21597c677ee9dbfcb7e08"
	I0314 19:42:21.859755    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640674    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab390fc53b998ec55449f16c05933add797f430f2cc6f4b55afabf79cd8b0bc7"
	I0314 19:42:21.859755    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.713400    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:21.859755    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.714712    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:21.859755    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736377    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-ca-certs\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:21.859755    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736439    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-k8s-certs\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:21.859923    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736466    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:21.859989    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736490    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-flexvolume-dir\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:21.860054    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736521    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-k8s-certs\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:21.860177    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736546    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/fa99a5621d016aa714804afcaa1e0a53-etcd-certs\") pod \"etcd-multinode-442000\" (UID: \"fa99a5621d016aa714804afcaa1e0a53\") " pod="kube-system/etcd-multinode-442000"
	I0314 19:42:21.860443    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736609    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-ca-certs\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:21.860516    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736642    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-kubeconfig\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:21.860636    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736675    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:21.860689    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736706    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/fa99a5621d016aa714804afcaa1e0a53-etcd-data\") pod \"etcd-multinode-442000\" (UID: \"fa99a5621d016aa714804afcaa1e0a53\") " pod="kube-system/etcd-multinode-442000"
	I0314 19:42:21.860689    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.996146    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="800ms"
	I0314 19:42:21.860875    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.009288    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-442000.17bcb8e6e82683f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-442000", UID:"multinode-442000", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-442000"}, FirstTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), LastTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-442000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.93.236:8443: connect: connection refused'(may retry after sleeping)
	I0314 19:42:21.860916    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.128790    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:21.860984    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.130034    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:21.860984    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.475229    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861049    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.475367    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861115    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.647700    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861188    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.647839    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861188    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.684558    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c70744e60ac50b50085376d0c124ff15cc884b8a836b0085ef71a65ddb06bcfd"
	I0314 19:42:21.861188    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.767121    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.767283    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.797772    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="1.6s"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.907277    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.907408    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.963548    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.967786    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:03 multinode-442000 kubelet[1523]: I0314 19:41:03.581966    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.875219    1523 kubelet_node_status.go:108] "Node was previously registered" node="multinode-442000"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.875953    1523 kubelet_node_status.go:73] "Successfully registered node" node="multinode-442000"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.881726    1523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.882677    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.894905    1523 setters.go:552] "Node became not ready" node="multinode-442000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-14T19:41:05Z","lastTransitionTime":"2024-03-14T19:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0314 19:42:21.861951    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: E0314 19:41:05.973748    1523 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-multinode-442000\" already exists" pod="kube-system/etcd-multinode-442000"
	I0314 19:42:21.862025    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.346543    1523 apiserver.go:52] "Watching apiserver"
	I0314 19:42:21.862067    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355573    1523 topology_manager.go:215] "Topology Admit Handler" podUID="677b9084-0026-4b21-b041-445940624ed7" podNamespace="kube-system" podName="kindnet-7b9lf"
	I0314 19:42:21.862067    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355823    1523 topology_manager.go:215] "Topology Admit Handler" podUID="c7f798bf-6722-4731-af8d-ccd5703d116e" podNamespace="kube-system" podName="kube-proxy-cg28g"
	I0314 19:42:21.862067    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355970    1523 topology_manager.go:215] "Topology Admit Handler" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac" podNamespace="kube-system" podName="coredns-5dd5756b68-d22jc"
	I0314 19:42:21.862067    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.356220    1523 topology_manager.go:215] "Topology Admit Handler" podUID="65d76566-4401-4b28-8452-10ed98624901" podNamespace="kube-system" podName="storage-provisioner"
	I0314 19:42:21.862229    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.356515    1523 topology_manager.go:215] "Topology Admit Handler" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2" podNamespace="default" podName="busybox-5b5d89c9d6-7446n"
	I0314 19:42:21.862229    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.356776    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.862315    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.356948    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.862355    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.360847    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-442000" podUID="02a2d011-5f4c-451c-9698-a88e42e4b6c9"
	I0314 19:42:21.862434    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.388530    1523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0314 19:42:21.862485    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.394882    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:21.862485    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419699    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7f798bf-6722-4731-af8d-ccd5703d116e-xtables-lock\") pod \"kube-proxy-cg28g\" (UID: \"c7f798bf-6722-4731-af8d-ccd5703d116e\") " pod="kube-system/kube-proxy-cg28g"
	I0314 19:42:21.862564    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419828    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-cni-cfg\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:21.862654    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419854    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-lib-modules\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:21.862654    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419895    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/65d76566-4401-4b28-8452-10ed98624901-tmp\") pod \"storage-provisioner\" (UID: \"65d76566-4401-4b28-8452-10ed98624901\") " pod="kube-system/storage-provisioner"
	I0314 19:42:21.862654    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419943    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-xtables-lock\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:21.862774    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.420062    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7f798bf-6722-4731-af8d-ccd5703d116e-lib-modules\") pod \"kube-proxy-cg28g\" (UID: \"c7f798bf-6722-4731-af8d-ccd5703d116e\") " pod="kube-system/kube-proxy-cg28g"
	I0314 19:42:21.862774    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.420370    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.862896    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.420509    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:06.920467401 +0000 UTC m=+6.742091622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.862945    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447169    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863020    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447481    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863020    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447769    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:06.9477485 +0000 UTC m=+6.769372721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863097    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.496544    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81fdcd9740169a0b72b7c7316eeac39f" path="/var/lib/kubelet/pods/81fdcd9740169a0b72b7c7316eeac39f/volumes"
	I0314 19:42:21.863097    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.497856    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="92e70beb375f9f247f5f8395dc065033" path="/var/lib/kubelet/pods/92e70beb375f9f247f5f8395dc065033/volumes"
	I0314 19:42:21.863186    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.840791    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-442000" podUID="8974ad44-5d36-48f0-bc6b-9115bab5fb5e"
	I0314 19:42:21.863186    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.864488    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-442000" podStartSLOduration=0.864428449 podCreationTimestamp="2024-03-14 19:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:41:06.656175631 +0000 UTC m=+6.477799952" watchObservedRunningTime="2024-03-14 19:41:06.864428449 +0000 UTC m=+6.686052670"
	I0314 19:42:21.863278    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.889820    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-442000"
	I0314 19:42:21.863278    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.925613    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.863368    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.925789    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:07.925744766 +0000 UTC m=+7.747368987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.863457    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026456    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863457    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026485    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863547    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026583    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:08.02656612 +0000 UTC m=+7.848190341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863547    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.479340    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.863635    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.479540    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.863635    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.934416    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.863725    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.934566    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:09.934544359 +0000 UTC m=+9.756168580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.863814    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035285    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863814    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035328    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863904    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035382    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:10.035364414 +0000 UTC m=+9.856988635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: I0314 19:41:08.192454    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-442000" podUID="8974ad44-5d36-48f0-bc6b-9115bab5fb5e"
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: I0314 19:41:08.232807    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-442000" podStartSLOduration=2.232765597 podCreationTimestamp="2024-03-14 19:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:41:08.211688076 +0000 UTC m=+8.033312297" watchObservedRunningTime="2024-03-14 19:41:08.232765597 +0000 UTC m=+8.054389818"
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.480285    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.480350    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.954598    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.954683    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:13.95466674 +0000 UTC m=+13.776290961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055917    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055948    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055999    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:14.055983733 +0000 UTC m=+13.877608054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:11 multinode-442000 kubelet[1523]: E0314 19:41:11.480167    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.865406    8428 command_runner.go:130] > Mar 14 19:41:11 multinode-442000 kubelet[1523]: E0314 19:41:11.480285    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.865406    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.480095    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.865406    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.480797    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.988392    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.988528    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:21.98850961 +0000 UTC m=+21.810133831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089208    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089365    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089427    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:22.089409571 +0000 UTC m=+21.911033792 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:15 multinode-442000 kubelet[1523]: E0314 19:41:15.480116    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:15 multinode-442000 kubelet[1523]: E0314 19:41:15.480286    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:17 multinode-442000 kubelet[1523]: E0314 19:41:17.479583    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:17 multinode-442000 kubelet[1523]: E0314 19:41:17.480025    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:19 multinode-442000 kubelet[1523]: E0314 19:41:19.480562    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:19 multinode-442000 kubelet[1523]: E0314 19:41:19.480625    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:21 multinode-442000 kubelet[1523]: E0314 19:41:21.479895    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:21 multinode-442000 kubelet[1523]: E0314 19:41:21.480437    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.061436    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.061515    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:38.061499618 +0000 UTC m=+37.883123839 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162555    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162603    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162667    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:38.162650651 +0000 UTC m=+37.984274872 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:23 multinode-442000 kubelet[1523]: E0314 19:41:23.480157    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:23 multinode-442000 kubelet[1523]: E0314 19:41:23.481151    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:25 multinode-442000 kubelet[1523]: E0314 19:41:25.479970    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:25 multinode-442000 kubelet[1523]: E0314 19:41:25.480065    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:27 multinode-442000 kubelet[1523]: E0314 19:41:27.480032    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:27 multinode-442000 kubelet[1523]: E0314 19:41:27.480122    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:29 multinode-442000 kubelet[1523]: E0314 19:41:29.480034    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:29 multinode-442000 kubelet[1523]: E0314 19:41:29.480291    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:31 multinode-442000 kubelet[1523]: E0314 19:41:31.479554    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:31 multinode-442000 kubelet[1523]: E0314 19:41:31.479650    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:33 multinode-442000 kubelet[1523]: E0314 19:41:33.479299    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:33 multinode-442000 kubelet[1523]: E0314 19:41:33.479835    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:35 multinode-442000 kubelet[1523]: E0314 19:41:35.479778    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:35 multinode-442000 kubelet[1523]: E0314 19:41:35.480230    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 kubelet[1523]: E0314 19:41:37.480388    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 kubelet[1523]: E0314 19:41:37.480921    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.089907    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.868716    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.090056    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:42:10.090036325 +0000 UTC m=+69.911660546 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191172    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191351    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191425    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:42:10.191406835 +0000 UTC m=+70.013031056 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: I0314 19:41:38.578418    1523 scope.go:117] "RemoveContainer" containerID="07c2872c48edaa090b20d66267963c0d69c5c9eb97824b199af2d7e611ac596a"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: I0314 19:41:38.578814    1523 scope.go:117] "RemoveContainer" containerID="2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.579025    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(65d76566-4401-4b28-8452-10ed98624901)\"" pod="kube-system/storage-provisioner" podUID="65d76566-4401-4b28-8452-10ed98624901"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:39 multinode-442000 kubelet[1523]: E0314 19:41:39.479691    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:39 multinode-442000 kubelet[1523]: E0314 19:41:39.479909    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: E0314 19:41:41.479574    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: E0314 19:41:41.480003    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: I0314 19:41:41.518811    1523 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 kubelet[1523]: I0314 19:41:53.480206    1523 scope.go:117] "RemoveContainer" containerID="2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: I0314 19:42:00.447192    1523 scope.go:117] "RemoveContainer" containerID="9585e3eb2ead2f471eb0d22c8e29e4bfd954095774af365d80329ea39fff78e1"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: I0314 19:42:00.490865    1523 scope.go:117] "RemoveContainer" containerID="cd640f130e429bd4182c258358ec791604b8f307f9c45f2e3880e9b1a7df666a"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: E0314 19:42:00.516969    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0314 19:42:21.869306    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0314 19:42:21.869306    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0314 19:42:21.869306    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0314 19:42:21.869306    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.167906    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f"
	I0314 19:42:21.869306    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.214897    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439"
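
The kubelet entries above show two recovery patterns. Failed volume mounts (the MountVolume.SetUp errors for the coredns config-volume and the busybox kube-api-access token) are retried on a doubling schedule: the durationBeforeRetry values step from 500ms to 1s, 2s, 4s, 8s, 16s and finally 32s, and the errors stop once the missing ConfigMaps are re-registered. The pod syncs blocked on the uninitialized CNI clear once the node reports Ready at 19:41:41. A minimal Go sketch of that doubling-backoff schedule follows; retryWithBackoff, maxBackoff, and op are illustrative names for this sketch, not kubelet internals.

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff retries op until it succeeds, doubling the wait
// between attempts from 500ms up to a 32s cap -- the same schedule
// visible in the "durationBeforeRetry" kubelet log lines above.
// (Illustrative sketch; not the kubelet's actual retry code.)
func retryWithBackoff(op func() error) {
	const maxBackoff = 32 * time.Second
	backoff := 500 * time.Millisecond
	for {
		if err := op(); err == nil {
			return
		}
		fmt.Printf("retrying in %v\n", backoff)
		time.Sleep(backoff)
		if backoff < maxBackoff {
			backoff *= 2
		}
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() error {
		attempts++
		if attempts < 5 {
			return fmt.Errorf("object not registered")
		}
		return nil
	})
	fmt.Println("mount succeeded after", attempts, "attempts")
}

Capping the doubling keeps a long outage from pushing retries arbitrarily far apart; 32s is the largest interval observed in the log before the mounts succeed.
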
	I0314 19:42:21.911288    8428 logs.go:123] Gathering logs for kube-apiserver [a598d24960de] ...
	I0314 19:42:21.911288    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a598d24960de"
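
The kube-apiserver log gathered below accounts for the "dial tcp 172.17.93.236:8443: connect: connection refused" errors earlier in the kubelet log: the kubelet kept retrying until the apiserver began "Serving securely on [::]:8443" at 19:41:05, after which node registration succeeded. A readiness probe for that window could look like the following Go sketch; the address and timeout here are assumptions for illustration, not values taken from minikube.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls the apiserver's TCP endpoint until it accepts
// connections, mirroring the window in the kubelet log where every
// request to 172.17.93.236:8443 failed with "connection refused" until
// the apiserver finished starting. (Illustrative sketch only.)
func waitForAPIServer(addr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("dial %s: %v; retrying\n", addr, err)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not reachable within %v", addr, deadline)
}

func main() {
	if err := waitForAPIServer("172.17.93.236:8443", 90*time.Second); err != nil {
		fmt.Println(err)
	}
}
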
	I0314 19:42:21.941844    8428 command_runner.go:130] ! I0314 19:41:02.580148       1 options.go:220] external host was not specified, using 172.17.93.236
	I0314 19:42:21.941937    8428 command_runner.go:130] ! I0314 19:41:02.584195       1 server.go:148] Version: v1.28.4
	I0314 19:42:21.941937    8428 command_runner.go:130] ! I0314 19:41:02.584361       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.942223    8428 command_runner.go:130] ! I0314 19:41:03.945945       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0314 19:42:21.942280    8428 command_runner.go:130] ! I0314 19:41:03.963375       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0314 19:42:21.942388    8428 command_runner.go:130] ! I0314 19:41:03.963415       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0314 19:42:21.942388    8428 command_runner.go:130] ! I0314 19:41:03.963973       1 instance.go:298] Using reconciler: lease
	I0314 19:42:21.942447    8428 command_runner.go:130] ! I0314 19:41:04.031000       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0314 19:42:21.942474    8428 command_runner.go:130] ! W0314 19:41:04.031118       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.942474    8428 command_runner.go:130] ! I0314 19:41:04.342643       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0314 19:42:21.942474    8428 command_runner.go:130] ! I0314 19:41:04.343120       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0314 19:42:21.942558    8428 command_runner.go:130] ! I0314 19:41:04.862959       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0314 19:42:21.942558    8428 command_runner.go:130] ! I0314 19:41:04.875745       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0314 19:42:21.942558    8428 command_runner.go:130] ! W0314 19:41:04.875858       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.942641    8428 command_runner.go:130] ! W0314 19:41:04.875867       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.942641    8428 command_runner.go:130] ! I0314 19:41:04.876422       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0314 19:42:21.942641    8428 command_runner.go:130] ! W0314 19:41:04.876506       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.942693    8428 command_runner.go:130] ! I0314 19:41:04.877676       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0314 19:42:21.942723    8428 command_runner.go:130] ! I0314 19:41:04.878707       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0314 19:42:21.942723    8428 command_runner.go:130] ! W0314 19:41:04.878804       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0314 19:42:21.942806    8428 command_runner.go:130] ! W0314 19:41:04.878812       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0314 19:42:21.942806    8428 command_runner.go:130] ! I0314 19:41:04.881331       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0314 19:42:21.942864    8428 command_runner.go:130] ! W0314 19:41:04.881418       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0314 19:42:21.942890    8428 command_runner.go:130] ! I0314 19:41:04.882613       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0314 19:42:21.942890    8428 command_runner.go:130] ! W0314 19:41:04.882706       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.942947    8428 command_runner.go:130] ! W0314 19:41:04.882714       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943000    8428 command_runner.go:130] ! I0314 19:41:04.883473       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0314 19:42:21.943048    8428 command_runner.go:130] ! W0314 19:41:04.883562       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943048    8428 command_runner.go:130] ! W0314 19:41:04.883619       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943112    8428 command_runner.go:130] ! I0314 19:41:04.884340       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0314 19:42:21.943136    8428 command_runner.go:130] ! I0314 19:41:04.886289       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.886373       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.886382       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.886877       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.886971       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.886979       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.888213       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.888261       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.903461       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.903509       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.903517       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.906409       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.906458       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.906466       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.915366       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.915463       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.915472       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.916839       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.918318       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.918410       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.918418       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.922469       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.922563       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.922576       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.923589       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0314 19:42:21.943703    8428 command_runner.go:130] ! W0314 19:41:04.923671       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943703    8428 command_runner.go:130] ! W0314 19:41:04.923678       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943703    8428 command_runner.go:130] ! I0314 19:41:04.924323       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0314 19:42:21.943703    8428 command_runner.go:130] ! W0314 19:41:04.924409       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943703    8428 command_runner.go:130] ! I0314 19:41:04.946149       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0314 19:42:21.943809    8428 command_runner.go:130] ! W0314 19:41:04.946188       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943837    8428 command_runner.go:130] ! I0314 19:41:05.649195       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:21.943837    8428 command_runner.go:130] ! I0314 19:41:05.649351       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:21.943927    8428 command_runner.go:130] ! I0314 19:41:05.650113       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0314 19:42:21.943927    8428 command_runner.go:130] ! I0314 19:41:05.651281       1 secure_serving.go:213] Serving securely on [::]:8443
	I0314 19:42:21.943927    8428 command_runner.go:130] ! I0314 19:41:05.651311       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:21.943983    8428 command_runner.go:130] ! I0314 19:41:05.651726       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.651907       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.654468       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.654814       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.655201       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.656049       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.656308       1 available_controller.go:423] Starting AvailableConditionController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.656404       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.651597       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.656599       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.658623       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.658785       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.659483       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.661076       1 aggregator.go:164] waiting for initial CRD sync...
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.662487       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.662789       1 controller.go:78] Starting OpenAPI AggregationController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.727194       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.728512       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729067       1 controller.go:116] Starting legacy_token_tracking_controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729317       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729432       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729507       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729606       1 controller.go:134] Starting OpenAPI controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729710       1 controller.go:85] Starting OpenAPI V3 controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729812       1 naming_controller.go:291] Starting NamingConditionController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729911       1 establishing_controller.go:76] Starting EstablishingController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.730411       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0314 19:42:21.944613    8428 command_runner.go:130] ! I0314 19:41:05.730521       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0314 19:42:21.944613    8428 command_runner.go:130] ! I0314 19:41:05.730616       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0314 19:42:21.944613    8428 command_runner.go:130] ! I0314 19:41:05.799477       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 19:42:21.944613    8428 command_runner.go:130] ! I0314 19:41:05.813580       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 19:42:21.944701    8428 command_runner.go:130] ! I0314 19:41:05.830168       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 19:42:21.944701    8428 command_runner.go:130] ! I0314 19:41:05.830217       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 19:42:21.944701    8428 command_runner.go:130] ! I0314 19:41:05.830281       1 aggregator.go:166] initial CRD sync complete...
	I0314 19:42:21.944783    8428 command_runner.go:130] ! I0314 19:41:05.830289       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 19:42:21.944846    8428 command_runner.go:130] ! I0314 19:41:05.830295       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 19:42:21.944869    8428 command_runner.go:130] ! I0314 19:41:05.830301       1 cache.go:39] Caches are synced for autoregister controller
	I0314 19:42:21.944926    8428 command_runner.go:130] ! I0314 19:41:05.846941       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 19:42:21.944977    8428 command_runner.go:130] ! I0314 19:41:05.857057       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 19:42:21.945012    8428 command_runner.go:130] ! I0314 19:41:05.858966       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 19:42:21.945036    8428 command_runner.go:130] ! I0314 19:41:05.865554       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 19:42:21.945092    8428 command_runner.go:130] ! I0314 19:41:05.865721       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 19:42:21.945115    8428 command_runner.go:130] ! I0314 19:41:06.667315       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 19:42:21.945142    8428 command_runner.go:130] ! W0314 19:41:07.118314       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.17.93.236]
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:07.120612       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:07.135973       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:09.049225       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:09.264220       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:09.277110       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:09.393446       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:09.422214       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 19:42:21.952646    8428 logs.go:123] Gathering logs for kube-scheduler [dbb603289bf1] ...
	I0314 19:42:21.952646    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb603289bf1"
	I0314 19:42:21.979163    8428 command_runner.go:130] ! I0314 19:18:59.007917       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:21.979655    8428 command_runner.go:130] ! W0314 19:19:00.211611       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0314 19:42:21.979655    8428 command_runner.go:130] ! W0314 19:19:00.212802       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:21.979742    8428 command_runner.go:130] ! W0314 19:19:00.212990       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0314 19:42:21.979742    8428 command_runner.go:130] ! W0314 19:19:00.213108       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:42:21.979742    8428 command_runner.go:130] ! I0314 19:19:00.283055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:42:21.979742    8428 command_runner.go:130] ! I0314 19:19:00.284207       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.979742    8428 command_runner.go:130] ! I0314 19:19:00.288027       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:42:21.979849    8428 command_runner.go:130] ! I0314 19:19:00.288233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:21.979849    8428 command_runner.go:130] ! I0314 19:19:00.288206       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:42:21.979929    8428 command_runner.go:130] ! I0314 19:19:00.290233       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:21.979929    8428 command_runner.go:130] ! W0314 19:19:00.293166       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:21.980006    8428 command_runner.go:130] ! E0314 19:19:00.293367       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:21.980085    8428 command_runner.go:130] ! W0314 19:19:00.311723       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:21.980085    8428 command_runner.go:130] ! E0314 19:19:00.311803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:21.980160    8428 command_runner.go:130] ! W0314 19:19:00.312480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.980235    8428 command_runner.go:130] ! E0314 19:19:00.317665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.980235    8428 command_runner.go:130] ! W0314 19:19:00.313212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:21.980313    8428 command_runner.go:130] ! W0314 19:19:00.313379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:21.980313    8428 command_runner.go:130] ! W0314 19:19:00.313450       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:21.980388    8428 command_runner.go:130] ! W0314 19:19:00.313586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.980463    8428 command_runner.go:130] ! W0314 19:19:00.313632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.980463    8428 command_runner.go:130] ! W0314 19:19:00.313705       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:21.980538    8428 command_runner.go:130] ! W0314 19:19:00.313774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:21.980538    8428 command_runner.go:130] ! W0314 19:19:00.313864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:21.980612    8428 command_runner.go:130] ! W0314 19:19:00.313910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:21.980685    8428 command_runner.go:130] ! W0314 19:19:00.313978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:21.980685    8428 command_runner.go:130] ! W0314 19:19:00.314056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.980761    8428 command_runner.go:130] ! W0314 19:19:00.314091       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:21.980835    8428 command_runner.go:130] ! E0314 19:19:00.318101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:21.980835    8428 command_runner.go:130] ! E0314 19:19:00.318394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:21.980909    8428 command_runner.go:130] ! E0314 19:19:00.318606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:21.980983    8428 command_runner.go:130] ! E0314 19:19:00.318728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981058    8428 command_runner.go:130] ! E0314 19:19:00.318953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981058    8428 command_runner.go:130] ! E0314 19:19:00.319076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:21.981131    8428 command_runner.go:130] ! E0314 19:19:00.319318       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:21.981131    8428 command_runner.go:130] ! E0314 19:19:00.319575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:21.981205    8428 command_runner.go:130] ! E0314 19:19:00.319588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:21.981278    8428 command_runner.go:130] ! E0314 19:19:00.319719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:21.981278    8428 command_runner.go:130] ! E0314 19:19:00.319732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981357    8428 command_runner.go:130] ! E0314 19:19:00.319788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:21.981431    8428 command_runner.go:130] ! W0314 19:19:01.268901       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:21.981431    8428 command_runner.go:130] ! E0314 19:19:01.269219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:21.981506    8428 command_runner.go:130] ! W0314 19:19:01.309661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981583    8428 command_runner.go:130] ! E0314 19:19:01.309894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981583    8428 command_runner.go:130] ! W0314 19:19:01.318104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981666    8428 command_runner.go:130] ! E0314 19:19:01.318410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981721    8428 command_runner.go:130] ! W0314 19:19:01.382148       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:21.981755    8428 command_runner.go:130] ! E0314 19:19:01.382194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! W0314 19:19:01.454259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! E0314 19:19:01.454398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! W0314 19:19:01.505982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! E0314 19:19:01.506182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! W0314 19:19:01.640521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! E0314 19:19:01.640836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! W0314 19:19:01.681052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! E0314 19:19:01.681953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! W0314 19:19:01.732243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! E0314 19:19:01.732288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! W0314 19:19:01.767241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! E0314 19:19:01.767329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:21.982324    8428 command_runner.go:130] ! W0314 19:19:01.783665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.982404    8428 command_runner.go:130] ! E0314 19:19:01.783845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.982437    8428 command_runner.go:130] ! W0314 19:19:01.812936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:21.982467    8428 command_runner.go:130] ! E0314 19:19:01.813027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:21.982467    8428 command_runner.go:130] ! W0314 19:19:01.821109       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:21.982467    8428 command_runner.go:130] ! E0314 19:19:01.821267       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:21.982467    8428 command_runner.go:130] ! W0314 19:19:01.843311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:21.982467    8428 command_runner.go:130] ! E0314 19:19:01.843339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:21.982467    8428 command_runner.go:130] ! W0314 19:19:01.914649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:21.982467    8428 command_runner.go:130] ! E0314 19:19:01.914986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:21.982467    8428 command_runner.go:130] ! I0314 19:19:04.090863       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:21.982467    8428 command_runner.go:130] ! I0314 19:38:43.236637       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0314 19:42:21.982467    8428 command_runner.go:130] ! I0314 19:38:43.237145       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0314 19:42:21.982467    8428 command_runner.go:130] ! E0314 19:38:43.237439       1 run.go:74] "command failed" err="finished without leader elect"
	I0314 19:42:21.993743    8428 logs.go:123] Gathering logs for dmesg ...
	I0314 19:42:21.993743    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:42:22.014134    8428 command_runner.go:130] > [Mar14 19:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.111500] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.025646] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.051209] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.017569] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0314 19:42:22.014134    8428 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +5.774438] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.663188] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +1.473946] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +5.849126] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0314 19:42:22.015143    8428 command_runner.go:130] > [Mar14 19:40] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.179743] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [ +24.853688] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.096946] kauditd_printk_skb: 73 callbacks suppressed
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.497369] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.185545] systemd-fstab-generator[1021]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.215423] systemd-fstab-generator[1035]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +2.887443] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.193519] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.182072] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.258988] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.819687] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.099817] kauditd_printk_skb: 205 callbacks suppressed
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +2.940519] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [Mar14 19:41] kauditd_printk_skb: 84 callbacks suppressed
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +4.042735] systemd-fstab-generator[3087]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +7.733278] kauditd_printk_skb: 70 callbacks suppressed
	I0314 19:42:22.017600    8428 logs.go:123] Gathering logs for coredns [8899bc003893] ...
	I0314 19:42:22.017600    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8899bc003893"
	I0314 19:42:22.046053    8428 command_runner.go:130] > .:53
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0314 19:42:22.046053    8428 command_runner.go:130] > CoreDNS-1.10.1
	I0314 19:42:22.046053    8428 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 127.0.0.1:56069 - 18242 "HINFO IN 687842018263708116.264844942244880205. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.040568923s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:42598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000297623s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:49284 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.094729955s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:58753 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.047978925s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:60240 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.250879171s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:35705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107809s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:38792 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00013461s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:53339 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000060304s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:55975 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000059805s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:55630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117109s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:50181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.122219329s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:58918 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194615s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:48641 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012501s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:57540 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.0346353s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:59969 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000278722s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:51295 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167413s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:45005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148512s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:51938 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100608s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:46248 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00024762s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:46501 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100408s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:52414 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056704s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:44908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000121409s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:49578 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011941s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:51057 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060205s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:56240 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055805s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:32901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172914s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:41115 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149912s
	I0314 19:42:22.046574    8428 command_runner.go:130] > [INFO] 10.244.0.3:40494 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013161s
	I0314 19:42:22.046667    8428 command_runner.go:130] > [INFO] 10.244.0.3:40575 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077106s
	I0314 19:42:22.046759    8428 command_runner.go:130] > [INFO] 10.244.1.2:55307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194115s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:46435 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00025832s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:52095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156813s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:57849 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012701s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.0.3:47270 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244119s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.0.3:59009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000411532s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.0.3:40925 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108108s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.0.3:56417 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000067706s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108409s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:38949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118209s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:56933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156413s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:35971 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000072406s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0314 19:42:22.049606    8428 logs.go:123] Gathering logs for kindnet [1a321c0e8997] ...
	I0314 19:42:22.049606    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a321c0e8997"
	I0314 19:42:22.078790    8428 command_runner.go:130] ! I0314 19:27:36.366640       1 main.go:227] handling current node
	I0314 19:42:22.078871    8428 command_runner.go:130] ! I0314 19:27:36.366652       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.078871    8428 command_runner.go:130] ! I0314 19:27:36.366658       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.078910    8428 command_runner.go:130] ! I0314 19:27:36.366818       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.078947    8428 command_runner.go:130] ! I0314 19:27:36.366827       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.078947    8428 command_runner.go:130] ! I0314 19:27:46.378468       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.078982    8428 command_runner.go:130] ! I0314 19:27:46.378496       1 main.go:227] handling current node
	I0314 19:42:22.078982    8428 command_runner.go:130] ! I0314 19:27:46.378506       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.078982    8428 command_runner.go:130] ! I0314 19:27:46.378513       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.078982    8428 command_runner.go:130] ! I0314 19:27:46.379039       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.078982    8428 command_runner.go:130] ! I0314 19:27:46.379130       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079139    8428 command_runner.go:130] ! I0314 19:27:56.393642       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079214    8428 command_runner.go:130] ! I0314 19:27:56.393700       1 main.go:227] handling current node
	I0314 19:42:22.079214    8428 command_runner.go:130] ! I0314 19:27:56.393723       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079248    8428 command_runner.go:130] ! I0314 19:27:56.393733       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079248    8428 command_runner.go:130] ! I0314 19:27:56.394716       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079293    8428 command_runner.go:130] ! I0314 19:27:56.394779       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079293    8428 command_runner.go:130] ! I0314 19:28:06.403171       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079293    8428 command_runner.go:130] ! I0314 19:28:06.403199       1 main.go:227] handling current node
	I0314 19:42:22.079293    8428 command_runner.go:130] ! I0314 19:28:06.403212       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079351    8428 command_runner.go:130] ! I0314 19:28:06.403219       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079351    8428 command_runner.go:130] ! I0314 19:28:06.403663       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079351    8428 command_runner.go:130] ! I0314 19:28:06.403834       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079406    8428 command_runner.go:130] ! I0314 19:28:16.415146       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079406    8428 command_runner.go:130] ! I0314 19:28:16.415237       1 main.go:227] handling current node
	I0314 19:42:22.079406    8428 command_runner.go:130] ! I0314 19:28:16.415250       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079406    8428 command_runner.go:130] ! I0314 19:28:16.415260       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079464    8428 command_runner.go:130] ! I0314 19:28:16.415497       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079464    8428 command_runner.go:130] ! I0314 19:28:16.415703       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079491    8428 command_runner.go:130] ! I0314 19:28:26.430257       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079491    8428 command_runner.go:130] ! I0314 19:28:26.430350       1 main.go:227] handling current node
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:26.430364       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:26.430372       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:26.430709       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:26.430804       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:36.445854       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:36.445897       1 main.go:227] handling current node
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:36.445915       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:36.446285       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:36.446702       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:36.446731       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:46.461369       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:46.462057       1 main.go:227] handling current node
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:46.462235       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:46.462250       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:46.462593       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:46.462770       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:56.477451       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:56.477483       1 main.go:227] handling current node
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:56.477496       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:56.477508       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:56.478007       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:56.478089       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:06.484423       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:06.484497       1 main.go:227] handling current node
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:06.484559       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:06.484624       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:06.484852       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:06.484945       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:16.500812       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:16.500909       1 main.go:227] handling current node
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:16.500924       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:16.500932       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:16.501505       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080080    8428 command_runner.go:130] ! I0314 19:29:16.501585       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080080    8428 command_runner.go:130] ! I0314 19:29:26.508494       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080080    8428 command_runner.go:130] ! I0314 19:29:26.508585       1 main.go:227] handling current node
	I0314 19:42:22.080124    8428 command_runner.go:130] ! I0314 19:29:26.508601       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080124    8428 command_runner.go:130] ! I0314 19:29:26.508609       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080124    8428 command_runner.go:130] ! I0314 19:29:26.508822       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080176    8428 command_runner.go:130] ! I0314 19:29:26.508837       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080210    8428 command_runner.go:130] ! I0314 19:29:36.517002       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:36.517123       1 main.go:227] handling current node
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:36.517142       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:36.517155       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:36.517648       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:36.517836       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:46.530826       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:46.530962       1 main.go:227] handling current node
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:46.530978       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:46.531314       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:46.531557       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:46.531706       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:56.551916       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:56.551953       1 main.go:227] handling current node
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:56.551965       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:56.551971       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:56.552084       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:56.552107       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:06.560066       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:06.560115       1 main.go:227] handling current node
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:06.560129       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:06.560136       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:06.560429       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:06.560534       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:16.573690       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:16.573731       1 main.go:227] handling current node
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:16.573978       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:16.574067       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:16.574385       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:16.574414       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:26.589277       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:26.589488       1 main.go:227] handling current node
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:26.589534       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:26.589557       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:26.589802       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:26.589885       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080766    8428 command_runner.go:130] ! I0314 19:30:36.605356       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080766    8428 command_runner.go:130] ! I0314 19:30:36.605400       1 main.go:227] handling current node
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:36.605412       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:36.605418       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:36.605556       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:36.605625       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:46.612911       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:46.613010       1 main.go:227] handling current node
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:46.613025       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:46.613034       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:46.613445       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:46.615380       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:56.630605       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:56.630965       1 main.go:227] handling current node
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:56.631076       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:56.631132       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:56.631442       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:56.631542       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:06.643588       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:06.643631       1 main.go:227] handling current node
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:06.643643       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:06.643650       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:06.644160       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:06.644255       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:16.650940       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:16.651187       1 main.go:227] handling current node
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:16.651208       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:16.651236       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:16.651354       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:16.651374       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:26.665304       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:26.665403       1 main.go:227] handling current node
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:26.665418       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:26.665427       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:26.665674       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:26.665859       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:36.681645       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081331    8428 command_runner.go:130] ! I0314 19:31:36.681680       1 main.go:227] handling current node
	I0314 19:42:22.081373    8428 command_runner.go:130] ! I0314 19:31:36.681695       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081373    8428 command_runner.go:130] ! I0314 19:31:36.681704       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081413    8428 command_runner.go:130] ! I0314 19:31:36.682032       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081413    8428 command_runner.go:130] ! I0314 19:31:36.682062       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081472    8428 command_runner.go:130] ! I0314 19:31:46.697305       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081472    8428 command_runner.go:130] ! I0314 19:31:46.697415       1 main.go:227] handling current node
	I0314 19:42:22.081527    8428 command_runner.go:130] ! I0314 19:31:46.697432       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081527    8428 command_runner.go:130] ! I0314 19:31:46.697444       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081527    8428 command_runner.go:130] ! I0314 19:31:46.697965       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081609    8428 command_runner.go:130] ! I0314 19:31:46.698093       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:31:56.705518       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:31:56.705613       1 main.go:227] handling current node
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:31:56.705627       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:31:56.705635       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:31:56.706151       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:31:56.706269       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:06.716977       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:06.717087       1 main.go:227] handling current node
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:06.717105       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:06.717116       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:06.717701       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:06.717870       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:16.738903       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:16.738946       1 main.go:227] handling current node
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:16.738962       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:16.738971       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:16.739310       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:16.739420       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:26.749067       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:26.749521       1 main.go:227] handling current node
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:26.749656       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:26.749670       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:26.750040       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:26.750074       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:36.765313       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:36.765423       1 main.go:227] handling current node
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:36.765442       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:36.765453       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:36.766102       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:36.766130       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:46.781715       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:46.781800       1 main.go:227] handling current node
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:46.782151       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082159    8428 command_runner.go:130] ! I0314 19:32:46.782168       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082159    8428 command_runner.go:130] ! I0314 19:32:46.782370       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082207    8428 command_runner.go:130] ! I0314 19:32:46.782396       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:32:56.797473       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:32:56.797568       1 main.go:227] handling current node
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:32:56.797583       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:32:56.797621       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:32:56.797733       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:32:56.797772       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:06.803421       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:06.803513       1 main.go:227] handling current node
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:06.803527       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:06.803534       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:06.804158       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:06.804237       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:16.818983       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:16.819134       1 main.go:227] handling current node
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:16.819149       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:16.819157       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:16.819421       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:16.819491       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:26.826209       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:26.826474       1 main.go:227] handling current node
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:26.826509       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:26.826519       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:26.826794       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:26.826886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:36.839979       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:36.840555       1 main.go:227] handling current node
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:36.840828       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:36.840855       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:36.841055       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:36.841183       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:46.854483       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:46.854585       1 main.go:227] handling current node
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:46.854600       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:46.854608       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082758    8428 command_runner.go:130] ! I0314 19:33:46.855303       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082758    8428 command_runner.go:130] ! I0314 19:33:46.855389       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:33:56.867052       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:33:56.867136       1 main.go:227] handling current node
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:33:56.867150       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:33:56.867158       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:33:56.867493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:33:56.867886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:06.874298       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:06.874391       1 main.go:227] handling current node
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:06.874405       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:06.874413       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:06.874932       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:06.874962       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:16.890513       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:16.890589       1 main.go:227] handling current node
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:16.890604       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:16.890612       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:16.890870       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:16.890953       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:26.908423       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:26.908576       1 main.go:227] handling current node
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:26.908597       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:26.908606       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:26.909103       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:26.909271       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:36.915794       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:36.915910       1 main.go:227] handling current node
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:36.915926       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:36.915935       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:36.916282       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:36.916372       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:46.931699       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:46.931833       1 main.go:227] handling current node
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:46.931849       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:46.931858       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083324    8428 command_runner.go:130] ! I0314 19:34:46.932099       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083324    8428 command_runner.go:130] ! I0314 19:34:46.932124       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083324    8428 command_runner.go:130] ! I0314 19:34:56.946470       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.083408    8428 command_runner.go:130] ! I0314 19:34:56.946565       1 main.go:227] handling current node
	I0314 19:42:22.083408    8428 command_runner.go:130] ! I0314 19:34:56.946580       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.083408    8428 command_runner.go:130] ! I0314 19:34:56.946588       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083408    8428 command_runner.go:130] ! I0314 19:34:56.946812       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083408    8428 command_runner.go:130] ! I0314 19:34:56.946927       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083499    8428 command_runner.go:130] ! I0314 19:35:06.960844       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.083499    8428 command_runner.go:130] ! I0314 19:35:06.960939       1 main.go:227] handling current node
	I0314 19:42:22.083499    8428 command_runner.go:130] ! I0314 19:35:06.960954       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.083499    8428 command_runner.go:130] ! I0314 19:35:06.960962       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083581    8428 command_runner.go:130] ! I0314 19:35:06.961467       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083581    8428 command_runner.go:130] ! I0314 19:35:06.961574       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083581    8428 command_runner.go:130] ! I0314 19:35:16.981993       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.083581    8428 command_runner.go:130] ! I0314 19:35:16.982080       1 main.go:227] handling current node
	I0314 19:42:22.083665    8428 command_runner.go:130] ! I0314 19:35:16.982095       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.083665    8428 command_runner.go:130] ! I0314 19:35:16.982103       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083665    8428 command_runner.go:130] ! I0314 19:35:16.982594       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083665    8428 command_runner.go:130] ! I0314 19:35:16.982673       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083748    8428 command_runner.go:130] ! I0314 19:35:26.993848       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.083748    8428 command_runner.go:130] ! I0314 19:35:26.993940       1 main.go:227] handling current node
	I0314 19:42:22.083748    8428 command_runner.go:130] ! I0314 19:35:26.993955       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.083748    8428 command_runner.go:130] ! I0314 19:35:26.993963       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083748    8428 command_runner.go:130] ! I0314 19:35:26.994360       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083829    8428 command_runner.go:130] ! I0314 19:35:26.994437       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083829    8428 command_runner.go:130] ! I0314 19:35:37.008613       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.083829    8428 command_runner.go:130] ! I0314 19:35:37.008706       1 main.go:227] handling current node
	I0314 19:42:22.083829    8428 command_runner.go:130] ! I0314 19:35:37.008720       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.083829    8428 command_runner.go:130] ! I0314 19:35:37.008727       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083918    8428 command_runner.go:130] ! I0314 19:35:37.009233       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083918    8428 command_runner.go:130] ! I0314 19:35:37.009320       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083918    8428 command_runner.go:130] ! I0314 19:35:47.018420       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.083918    8428 command_runner.go:130] ! I0314 19:35:47.018526       1 main.go:227] handling current node
	I0314 19:42:22.083999    8428 command_runner.go:130] ! I0314 19:35:47.018541       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.083999    8428 command_runner.go:130] ! I0314 19:35:47.018549       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083999    8428 command_runner.go:130] ! I0314 19:35:47.018669       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083999    8428 command_runner.go:130] ! I0314 19:35:47.018680       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083999    8428 command_runner.go:130] ! I0314 19:35:57.025132       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084079    8428 command_runner.go:130] ! I0314 19:35:57.025207       1 main.go:227] handling current node
	I0314 19:42:22.084079    8428 command_runner.go:130] ! I0314 19:35:57.025220       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084079    8428 command_runner.go:130] ! I0314 19:35:57.025228       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084079    8428 command_runner.go:130] ! I0314 19:35:57.026009       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.084161    8428 command_runner.go:130] ! I0314 19:35:57.026145       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.084161    8428 command_runner.go:130] ! I0314 19:36:07.042281       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084161    8428 command_runner.go:130] ! I0314 19:36:07.042353       1 main.go:227] handling current node
	I0314 19:42:22.084161    8428 command_runner.go:130] ! I0314 19:36:07.042367       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084161    8428 command_runner.go:130] ! I0314 19:36:07.042375       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084240    8428 command_runner.go:130] ! I0314 19:36:07.042493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.084240    8428 command_runner.go:130] ! I0314 19:36:07.042500       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.084240    8428 command_runner.go:130] ! I0314 19:36:17.055539       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084329    8428 command_runner.go:130] ! I0314 19:36:17.055567       1 main.go:227] handling current node
	I0314 19:42:22.084329    8428 command_runner.go:130] ! I0314 19:36:17.055581       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084329    8428 command_runner.go:130] ! I0314 19:36:17.055588       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084329    8428 command_runner.go:130] ! I0314 19:36:17.056312       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.084329    8428 command_runner.go:130] ! I0314 19:36:17.056341       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.084408    8428 command_runner.go:130] ! I0314 19:36:27.067921       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084434    8428 command_runner.go:130] ! I0314 19:36:27.067961       1 main.go:227] handling current node
	I0314 19:42:22.084461    8428 command_runner.go:130] ! I0314 19:36:27.069052       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084490    8428 command_runner.go:130] ! I0314 19:36:27.069179       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084490    8428 command_runner.go:130] ! I0314 19:36:27.069306       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.084523    8428 command_runner.go:130] ! I0314 19:36:27.069332       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:37.082322       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:37.082413       1 main.go:227] handling current node
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:37.082429       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:37.082437       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:37.082972       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:37.083000       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:47.099685       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:47.099830       1 main.go:227] handling current node
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:47.099862       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:47.099982       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.107274       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.107368       1 main.go:227] handling current node
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.107382       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.107390       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.107827       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.107942       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.108076       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.84.215 Flags: [] Table: 0} 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:07.120709       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:07.121059       1 main.go:227] handling current node
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:07.121098       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:07.121109       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:07.121440       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:07.121455       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:17.137704       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:17.137784       1 main.go:227] handling current node
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:17.137796       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:17.137803       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:17.138265       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:17.138298       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:27.144505       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:27.144594       1 main.go:227] handling current node
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:27.144607       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:27.144615       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:27.145062       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085071    8428 command_runner.go:130] ! I0314 19:37:27.145092       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085071    8428 command_runner.go:130] ! I0314 19:37:37.154684       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085071    8428 command_runner.go:130] ! I0314 19:37:37.154836       1 main.go:227] handling current node
	I0314 19:42:22.085071    8428 command_runner.go:130] ! I0314 19:37:37.154851       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085071    8428 command_runner.go:130] ! I0314 19:37:37.154860       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085160    8428 command_runner.go:130] ! I0314 19:37:37.155452       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085160    8428 command_runner.go:130] ! I0314 19:37:37.155614       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085160    8428 command_runner.go:130] ! I0314 19:37:47.168249       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085160    8428 command_runner.go:130] ! I0314 19:37:47.168338       1 main.go:227] handling current node
	I0314 19:42:22.085160    8428 command_runner.go:130] ! I0314 19:37:47.168352       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085240    8428 command_runner.go:130] ! I0314 19:37:47.168360       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085240    8428 command_runner.go:130] ! I0314 19:37:47.168976       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085240    8428 command_runner.go:130] ! I0314 19:37:47.169064       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085240    8428 command_runner.go:130] ! I0314 19:37:57.176039       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085322    8428 command_runner.go:130] ! I0314 19:37:57.176130       1 main.go:227] handling current node
	I0314 19:42:22.085322    8428 command_runner.go:130] ! I0314 19:37:57.176145       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085322    8428 command_runner.go:130] ! I0314 19:37:57.176153       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085402    8428 command_runner.go:130] ! I0314 19:37:57.176528       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085402    8428 command_runner.go:130] ! I0314 19:37:57.176659       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085402    8428 command_runner.go:130] ! I0314 19:38:07.189890       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085402    8428 command_runner.go:130] ! I0314 19:38:07.189993       1 main.go:227] handling current node
	I0314 19:42:22.085545    8428 command_runner.go:130] ! I0314 19:38:07.190008       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085545    8428 command_runner.go:130] ! I0314 19:38:07.190016       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085545    8428 command_runner.go:130] ! I0314 19:38:07.190217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085545    8428 command_runner.go:130] ! I0314 19:38:07.190245       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085545    8428 command_runner.go:130] ! I0314 19:38:17.196541       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085545    8428 command_runner.go:130] ! I0314 19:38:17.196633       1 main.go:227] handling current node
	I0314 19:42:22.085640    8428 command_runner.go:130] ! I0314 19:38:17.196647       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085640    8428 command_runner.go:130] ! I0314 19:38:17.196655       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085640    8428 command_runner.go:130] ! I0314 19:38:17.196888       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085640    8428 command_runner.go:130] ! I0314 19:38:17.197012       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085721    8428 command_runner.go:130] ! I0314 19:38:27.217365       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085721    8428 command_runner.go:130] ! I0314 19:38:27.217460       1 main.go:227] handling current node
	I0314 19:42:22.085721    8428 command_runner.go:130] ! I0314 19:38:27.217475       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085800    8428 command_runner.go:130] ! I0314 19:38:27.217483       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085800    8428 command_runner.go:130] ! I0314 19:38:27.217621       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085800    8428 command_runner.go:130] ! I0314 19:38:27.217634       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085800    8428 command_runner.go:130] ! I0314 19:38:37.229941       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085800    8428 command_runner.go:130] ! I0314 19:38:37.230048       1 main.go:227] handling current node
	I0314 19:42:22.085881    8428 command_runner.go:130] ! I0314 19:38:37.230062       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085881    8428 command_runner.go:130] ! I0314 19:38:37.230070       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085881    8428 command_runner.go:130] ! I0314 19:38:37.230268       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085961    8428 command_runner.go:130] ! I0314 19:38:37.230338       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
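The kindnet entries above trace its reconciliation loop: roughly every ten seconds it walks the node list, skips route programming for the node it runs on ("handling current node"), and makes sure a route to each peer node's pod CIDR via that node's IP is in place. The routes.go:62 line marks the one real state change in this window, when multinode-442000-m03 came back as 172.17.84.215 with CIDR 10.244.3.0/24. Below is a minimal Go sketch of that pattern, assuming Linux and the github.com/vishvananda/netlink package; the names and structure are illustrative, not kindnet's actual source.

```go
// Illustrative sketch of the per-peer route reconciliation the kindnet log
// above traces (main.go:223/227/250, routes.go:62). Assumes Linux and the
// github.com/vishvananda/netlink package; not kindnet's real code.
package main

import (
	"log"
	"net"
	"time"

	"github.com/vishvananda/netlink"
)

// peer pairs a node's reachable IP with the pod CIDR scheduled onto it.
type peer struct {
	nodeIP  net.IP
	podCIDR string
}

func reconcile(self net.IP, peers []peer) {
	for _, p := range peers {
		if p.nodeIP.Equal(self) {
			log.Println("handling current node") // no route to ourselves
			continue
		}
		_, dst, err := net.ParseCIDR(p.podCIDR)
		if err != nil {
			log.Printf("bad CIDR %q: %v", p.podCIDR, err)
			continue
		}
		// RouteReplace keeps the loop idempotent: an unchanged route is a
		// no-op, and a moved node (m03 above) gets its route rewritten.
		if err := netlink.RouteReplace(&netlink.Route{Dst: dst, Gw: p.nodeIP}); err != nil {
			log.Printf("route %s via %s: %v", dst, p.nodeIP, err)
		}
	}
}

func main() {
	self := net.ParseIP("172.17.86.124")
	for {
		reconcile(self, []peer{
			{net.ParseIP("172.17.80.135"), "10.244.1.0/24"},
			{net.ParseIP("172.17.84.215"), "10.244.3.0/24"},
		})
		time.Sleep(10 * time.Second) // matches the ~10s cadence above
	}
}
```

Using a replace rather than an add is what lets the loop log the same lines every cycle without erroring: only the 19:36:57 cycle, where m03's gateway and CIDR actually changed, produces a visible route operation.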
	I0314 19:42:22.102667    8428 logs.go:123] Gathering logs for kube-proxy [497007582e44] ...
	I0314 19:42:22.102667    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497007582e44"
	I0314 19:42:22.132564    8428 command_runner.go:130] ! I0314 19:41:08.342277       1 server_others.go:69] "Using iptables proxy"
	I0314 19:42:22.132953    8428 command_runner.go:130] ! I0314 19:41:08.381589       1 node.go:141] Successfully retrieved node IP: 172.17.93.236
	I0314 19:42:22.132953    8428 command_runner.go:130] ! I0314 19:41:08.703360       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:42:22.132953    8428 command_runner.go:130] ! I0314 19:41:08.703384       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:42:22.132953    8428 command_runner.go:130] ! I0314 19:41:08.724122       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:42:22.133043    8428 command_runner.go:130] ! I0314 19:41:08.726554       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:42:22.133043    8428 command_runner.go:130] ! I0314 19:41:08.729424       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:42:22.133043    8428 command_runner.go:130] ! I0314 19:41:08.729460       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:22.133043    8428 command_runner.go:130] ! I0314 19:41:08.732062       1 config.go:188] "Starting service config controller"
	I0314 19:42:22.133043    8428 command_runner.go:130] ! I0314 19:41:08.732501       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:42:22.133043    8428 command_runner.go:130] ! I0314 19:41:08.732571       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:42:22.133126    8428 command_runner.go:130] ! I0314 19:41:08.732581       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:42:22.133170    8428 command_runner.go:130] ! I0314 19:41:08.733523       1 config.go:315] "Starting node config controller"
	I0314 19:42:22.133170    8428 command_runner.go:130] ! I0314 19:41:08.733550       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:42:22.133170    8428 command_runner.go:130] ! I0314 19:41:08.832968       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:42:22.133170    8428 command_runner.go:130] ! I0314 19:41:08.833049       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:42:22.133170    8428 command_runner.go:130] ! I0314 19:41:08.835501       1 shared_informer.go:318] Caches are synced for node config
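The kube-proxy startup lines above follow the standard client-go informer handshake: each config controller announces "Waiting for caches to sync" (shared_informer.go:311) and only proceeds once the initial LIST/WATCH has filled the local stores and "Caches are synced" (shared_informer.go:318) is logged. A minimal sketch of that handshake, assuming the k8s.io/client-go module and in-cluster credentials; the informers chosen here are illustrative:

```go
// Illustrative sketch of the cache-sync handshake the kube-proxy log above
// shows. Assumes k8s.io/client-go and in-cluster credentials.
package main

import (
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 15*time.Minute)
	svc := factory.Core().V1().Services().Informer()
	eps := factory.Discovery().V1().EndpointSlices().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // kicks off LIST+WATCH for every requested informer

	log.Println("Waiting for caches to sync")
	if !cache.WaitForCacheSync(stop, svc.HasSynced, eps.HasSynced) {
		log.Fatal("failed to sync caches")
	}
	// Only past this point may a proxier trust its local view of Services
	// and EndpointSlices, which is why the sync lines precede any rule writes.
	log.Println("Caches are synced")
}
```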
	I0314 19:42:22.137287    8428 logs.go:123] Gathering logs for kube-controller-manager [16b80f73683d] ...
	I0314 19:42:22.137376    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b80f73683d"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:57.791996       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:58.802083       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:58.802123       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:58.803952       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:58.804068       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:58.807259       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:58.807321       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.211766       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.241058       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.241394       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.241421       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.277645       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.277842       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.277987       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.278099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.278176       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.278283       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.278389       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.278566       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0314 19:42:22.167981    8428 command_runner.go:130] ! W0314 19:19:03.278710       1 shared_informer.go:593] resyncPeriod 13h23m0.648968128s is smaller than resyncCheckPeriod 15h46m21.421594093s and the informer has already started. Changing it to 15h46m21.421594093s
	I0314 19:42:22.167981    8428 command_runner.go:130] ! I0314 19:19:03.278915       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0314 19:42:22.167981    8428 command_runner.go:130] ! I0314 19:19:03.279052       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0314 19:42:22.168063    8428 command_runner.go:130] ! I0314 19:19:03.279196       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0314 19:42:22.168063    8428 command_runner.go:130] ! I0314 19:19:03.279291       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0314 19:42:22.168063    8428 command_runner.go:130] ! I0314 19:19:03.279313       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0314 19:42:22.168148    8428 command_runner.go:130] ! I0314 19:19:03.279560       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0314 19:42:22.168148    8428 command_runner.go:130] ! I0314 19:19:03.279688       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0314 19:42:22.168148    8428 command_runner.go:130] ! I0314 19:19:03.279834       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0314 19:42:22.168224    8428 command_runner.go:130] ! I0314 19:19:03.279857       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0314 19:42:22.168224    8428 command_runner.go:130] ! I0314 19:19:03.279927       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0314 19:42:22.168224    8428 command_runner.go:130] ! I0314 19:19:03.280011       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0314 19:42:22.168224    8428 command_runner.go:130] ! I0314 19:19:03.280106       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0314 19:42:22.168301    8428 command_runner.go:130] ! I0314 19:19:03.280148       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:42:22.168301    8428 command_runner.go:130] ! I0314 19:19:03.280224       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:42:22.168301    8428 command_runner.go:130] ! I0314 19:19:03.280306       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:22.168301    8428 command_runner.go:130] ! I0314 19:19:03.280392       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:42:22.168301    8428 command_runner.go:130] ! I0314 19:19:03.297527       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0314 19:42:22.168376    8428 command_runner.go:130] ! I0314 19:19:03.297675       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0314 19:42:22.168376    8428 command_runner.go:130] ! I0314 19:19:03.297706       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0314 19:42:22.168376    8428 command_runner.go:130] ! I0314 19:19:03.310691       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0314 19:42:22.168376    8428 command_runner.go:130] ! I0314 19:19:03.310864       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0314 19:42:22.168376    8428 command_runner.go:130] ! I0314 19:19:03.311121       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0314 19:42:22.168376    8428 command_runner.go:130] ! I0314 19:19:03.311163       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0314 19:42:22.168459    8428 command_runner.go:130] ! I0314 19:19:03.311170       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0314 19:42:22.168459    8428 command_runner.go:130] ! I0314 19:19:03.312491       1 shared_informer.go:318] Caches are synced for tokens
	I0314 19:42:22.168459    8428 command_runner.go:130] ! I0314 19:19:03.324271       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0314 19:42:22.168459    8428 command_runner.go:130] ! I0314 19:19:03.324640       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0314 19:42:22.168459    8428 command_runner.go:130] ! I0314 19:19:03.324856       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0314 19:42:22.168535    8428 command_runner.go:130] ! I0314 19:19:03.341489       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:42:22.168535    8428 command_runner.go:130] ! I0314 19:19:03.341829       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:42:22.168535    8428 command_runner.go:130] ! I0314 19:19:03.359979       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:42:22.168610    8428 command_runner.go:130] ! I0314 19:19:03.360131       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:42:22.168610    8428 command_runner.go:130] ! I0314 19:19:03.373006       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:42:22.168610    8428 command_runner.go:130] ! I0314 19:19:03.373343       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:42:22.168610    8428 command_runner.go:130] ! I0314 19:19:03.373606       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:42:22.168610    8428 command_runner.go:130] ! I0314 19:19:03.385026       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0314 19:42:22.168688    8428 command_runner.go:130] ! I0314 19:19:03.385081       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0314 19:42:22.168688    8428 command_runner.go:130] ! I0314 19:19:03.385807       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0314 19:42:22.168688    8428 command_runner.go:130] ! I0314 19:19:03.399556       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0314 19:42:22.168688    8428 command_runner.go:130] ! I0314 19:19:03.399796       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0314 19:42:22.168762    8428 command_runner.go:130] ! I0314 19:19:03.399936       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0314 19:42:22.168762    8428 command_runner.go:130] ! I0314 19:19:03.400078       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:42:22.168762    8428 command_runner.go:130] ! I0314 19:19:03.400349       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:42:22.168762    8428 command_runner.go:130] ! I0314 19:19:03.400489       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:42:22.168762    8428 command_runner.go:130] ! I0314 19:19:03.521977       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0314 19:42:22.168837    8428 command_runner.go:130] ! I0314 19:19:03.522076       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0314 19:42:22.168837    8428 command_runner.go:130] ! I0314 19:19:03.522086       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0314 19:42:22.168837    8428 command_runner.go:130] ! I0314 19:19:03.567446       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0314 19:42:22.168837    8428 command_runner.go:130] ! I0314 19:19:03.567574       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0314 19:42:22.168919    8428 command_runner.go:130] ! I0314 19:19:03.567615       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:22.168919    8428 command_runner.go:130] ! I0314 19:19:03.568792       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0314 19:42:22.168919    8428 command_runner.go:130] ! I0314 19:19:03.568891       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0314 19:42:22.168996    8428 command_runner.go:130] ! I0314 19:19:03.569119       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:22.168996    8428 command_runner.go:130] ! I0314 19:19:03.570147       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0314 19:42:22.168996    8428 command_runner.go:130] ! I0314 19:19:03.570261       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:22.168996    8428 command_runner.go:130] ! I0314 19:19:03.570356       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:22.168996    8428 command_runner.go:130] ! I0314 19:19:03.571403       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0314 19:42:22.169074    8428 command_runner.go:130] ! I0314 19:19:03.571529       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:22.169108    8428 command_runner.go:130] ! I0314 19:19:03.571434       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0314 19:42:22.169108    8428 command_runner.go:130] ! I0314 19:19:03.572095       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:42:22.169136    8428 command_runner.go:130] ! I0314 19:19:03.723142       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0314 19:42:22.169172    8428 command_runner.go:130] ! I0314 19:19:03.723289       1 ttl_controller.go:124] "Starting TTL controller"
	I0314 19:42:22.169197    8428 command_runner.go:130] ! I0314 19:19:03.723300       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0314 19:42:22.169197    8428 command_runner.go:130] ! I0314 19:19:13.784656       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0314 19:42:22.169197    8428 command_runner.go:130] ! I0314 19:19:13.784710       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0314 19:42:22.169197    8428 command_runner.go:130] ! I0314 19:19:13.784891       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0314 19:42:22.169262    8428 command_runner.go:130] ! I0314 19:19:13.784975       1 shared_informer.go:311] Waiting for caches to sync for node
	I0314 19:42:22.169262    8428 command_runner.go:130] ! I0314 19:19:13.813537       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:42:22.169262    8428 command_runner.go:130] ! I0314 19:19:13.814099       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:42:22.169262    8428 command_runner.go:130] ! I0314 19:19:13.814528       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:42:22.169340    8428 command_runner.go:130] ! I0314 19:19:13.831516       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0314 19:42:22.169340    8428 command_runner.go:130] ! I0314 19:19:13.831928       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0314 19:42:22.169340    8428 command_runner.go:130] ! I0314 19:19:13.832023       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:22.169340    8428 command_runner.go:130] ! I0314 19:19:13.832052       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0314 19:42:22.169340    8428 command_runner.go:130] ! I0314 19:19:13.876141       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0314 19:42:22.169414    8428 command_runner.go:130] ! I0314 19:19:13.876437       1 horizontal.go:200] "Starting HPA controller"
	I0314 19:42:22.169414    8428 command_runner.go:130] ! I0314 19:19:13.876448       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0314 19:42:22.169414    8428 command_runner.go:130] ! I0314 19:19:13.892498       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0314 19:42:22.169414    8428 command_runner.go:130] ! I0314 19:19:13.892891       1 disruption.go:433] "Sending events to api server."
	I0314 19:42:22.169414    8428 command_runner.go:130] ! I0314 19:19:13.893092       1 disruption.go:444] "Starting disruption controller"
	I0314 19:42:22.169494    8428 command_runner.go:130] ! I0314 19:19:13.893185       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:42:22.169494    8428 command_runner.go:130] ! I0314 19:19:13.895299       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0314 19:42:22.169494    8428 command_runner.go:130] ! I0314 19:19:13.895861       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0314 19:42:22.169494    8428 command_runner.go:130] ! I0314 19:19:13.896105       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0314 19:42:22.169494    8428 command_runner.go:130] ! I0314 19:19:13.908480       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:42:22.169569    8428 command_runner.go:130] ! I0314 19:19:13.908861       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:42:22.169569    8428 command_runner.go:130] ! I0314 19:19:13.908873       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:42:22.169569    8428 command_runner.go:130] ! I0314 19:19:13.929369       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0314 19:42:22.169569    8428 command_runner.go:130] ! I0314 19:19:13.929803       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0314 19:42:22.169644    8428 command_runner.go:130] ! I0314 19:19:13.930050       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0314 19:42:22.169644    8428 command_runner.go:130] ! I0314 19:19:13.974683       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:42:22.169644    8428 command_runner.go:130] ! I0314 19:19:13.974899       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:42:22.169644    8428 command_runner.go:130] ! I0314 19:19:13.975108       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:42:22.169720    8428 command_runner.go:130] ! E0314 19:19:14.134866       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0314 19:42:22.169720    8428 command_runner.go:130] ! I0314 19:19:14.135266       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0314 19:42:22.169720    8428 command_runner.go:130] ! E0314 19:19:14.170400       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 19:42:22.169720    8428 command_runner.go:130] ! I0314 19:19:14.170426       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 19:42:22.169795    8428 command_runner.go:130] ! I0314 19:19:14.324676       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0314 19:42:22.169795    8428 command_runner.go:130] ! I0314 19:19:14.324865       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 19:42:22.169795    8428 command_runner.go:130] ! I0314 19:19:14.325169       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 19:42:22.169795    8428 command_runner.go:130] ! I0314 19:19:14.474401       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0314 19:42:22.169795    8428 command_runner.go:130] ! I0314 19:19:14.474562       1 controller.go:169] "Starting ephemeral volume controller"
	I0314 19:42:22.169871    8428 command_runner.go:130] ! I0314 19:19:14.474660       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0314 19:42:22.169871    8428 command_runner.go:130] ! I0314 19:19:14.633668       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0314 19:42:22.169871    8428 command_runner.go:130] ! I0314 19:19:14.633821       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0314 19:42:22.169955    8428 command_runner.go:130] ! I0314 19:19:14.633832       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0314 19:42:22.169955    8428 command_runner.go:130] ! I0314 19:19:14.773955       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0314 19:42:22.169955    8428 command_runner.go:130] ! I0314 19:19:14.774019       1 gc_controller.go:101] "Starting GC controller"
	I0314 19:42:22.170048    8428 command_runner.go:130] ! I0314 19:19:14.774027       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 19:42:22.170048    8428 command_runner.go:130] ! I0314 19:19:14.925568       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0314 19:42:22.170048    8428 command_runner.go:130] ! I0314 19:19:14.925814       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 19:42:22.170048    8428 command_runner.go:130] ! I0314 19:19:14.925828       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 19:42:22.170048    8428 command_runner.go:130] ! I0314 19:19:15.075328       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0314 19:42:22.170135    8428 command_runner.go:130] ! I0314 19:19:15.075556       1 job_controller.go:226] "Starting job controller"
	I0314 19:42:22.170135    8428 command_runner.go:130] ! I0314 19:19:15.075634       1 shared_informer.go:311] Waiting for caches to sync for job
	I0314 19:42:22.170224    8428 command_runner.go:130] ! I0314 19:19:15.225929       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0314 19:42:22.170299    8428 command_runner.go:130] ! I0314 19:19:15.226065       1 expand_controller.go:328] "Starting expand controller"
	I0314 19:42:22.170299    8428 command_runner.go:130] ! I0314 19:19:15.226077       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0314 19:42:22.170337    8428 command_runner.go:130] ! I0314 19:19:15.378471       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:42:22.170337    8428 command_runner.go:130] ! I0314 19:19:15.378640       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:42:22.170337    8428 command_runner.go:130] ! I0314 19:19:15.379237       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:42:22.170337    8428 command_runner.go:130] ! I0314 19:19:15.525089       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:42:22.170337    8428 command_runner.go:130] ! I0314 19:19:15.525565       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:42:22.170427    8428 command_runner.go:130] ! I0314 19:19:15.525643       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:42:22.170427    8428 command_runner.go:130] ! I0314 19:19:15.679545       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:42:22.170427    8428 command_runner.go:130] ! I0314 19:19:15.679611       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:42:22.170427    8428 command_runner.go:130] ! I0314 19:19:15.679619       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:42:22.170503    8428 command_runner.go:130] ! I0314 19:19:15.825516       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:42:22.170503    8428 command_runner.go:130] ! I0314 19:19:15.825908       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:42:22.170503    8428 command_runner.go:130] ! I0314 19:19:15.825920       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:42:22.170581    8428 command_runner.go:130] ! I0314 19:19:15.976308       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0314 19:42:22.170581    8428 command_runner.go:130] ! I0314 19:19:15.976673       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0314 19:42:22.170581    8428 command_runner.go:130] ! I0314 19:19:15.976858       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0314 19:42:22.170581    8428 command_runner.go:130] ! I0314 19:19:15.993409       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:22.170581    8428 command_runner.go:130] ! I0314 19:19:16.017841       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000\" does not exist"
	I0314 19:42:22.170658    8428 command_runner.go:130] ! I0314 19:19:16.022817       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:22.170658    8428 command_runner.go:130] ! I0314 19:19:16.023332       1 shared_informer.go:318] Caches are synced for TTL
	I0314 19:42:22.170658    8428 command_runner.go:130] ! I0314 19:19:16.025413       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 19:42:22.170658    8428 command_runner.go:130] ! I0314 19:19:16.025667       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 19:42:22.170658    8428 command_runner.go:130] ! I0314 19:19:16.025909       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 19:42:22.170736    8428 command_runner.go:130] ! I0314 19:19:16.026194       1 shared_informer.go:318] Caches are synced for expand
	I0314 19:42:22.170736    8428 command_runner.go:130] ! I0314 19:19:16.030689       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 19:42:22.170736    8428 command_runner.go:130] ! I0314 19:19:16.042937       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 19:42:22.170736    8428 command_runner.go:130] ! I0314 19:19:16.063170       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 19:42:22.170736    8428 command_runner.go:130] ! I0314 19:19:16.069816       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0314 19:42:22.170812    8428 command_runner.go:130] ! I0314 19:19:16.069953       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0314 19:42:22.170812    8428 command_runner.go:130] ! I0314 19:19:16.071382       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:22.170812    8428 command_runner.go:130] ! I0314 19:19:16.072881       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 19:42:22.170812    8428 command_runner.go:130] ! I0314 19:19:16.075260       1 shared_informer.go:318] Caches are synced for GC
	I0314 19:42:22.170812    8428 command_runner.go:130] ! I0314 19:19:16.075273       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.075312       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.076852       1 shared_informer.go:318] Caches are synced for HPA
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.077008       1 shared_informer.go:318] Caches are synced for crt configmap
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.077022       1 shared_informer.go:318] Caches are synced for job
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.079681       1 shared_informer.go:318] Caches are synced for deployment
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.079893       1 shared_informer.go:318] Caches are synced for cronjob
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.085788       1 shared_informer.go:318] Caches are synced for node
	I0314 19:42:22.170966    8428 command_runner.go:130] ! I0314 19:19:16.085869       1 range_allocator.go:174] "Sending events to api server"
	I0314 19:42:22.170966    8428 command_runner.go:130] ! I0314 19:19:16.085937       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 19:42:22.170966    8428 command_runner.go:130] ! I0314 19:19:16.085945       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 19:42:22.170966    8428 command_runner.go:130] ! I0314 19:19:16.085951       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 19:42:22.170966    8428 command_runner.go:130] ! I0314 19:19:16.086224       1 shared_informer.go:318] Caches are synced for PVC protection
	I0314 19:42:22.171041    8428 command_runner.go:130] ! I0314 19:19:16.093730       1 shared_informer.go:318] Caches are synced for disruption
	I0314 19:42:22.171041    8428 command_runner.go:130] ! I0314 19:19:16.093802       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:22.171041    8428 command_runner.go:130] ! I0314 19:19:16.097148       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 19:42:22.171041    8428 command_runner.go:130] ! I0314 19:19:16.098688       1 shared_informer.go:318] Caches are synced for service account
	I0314 19:42:22.171117    8428 command_runner.go:130] ! I0314 19:19:16.102404       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000" podCIDRs=["10.244.0.0/24"]
	I0314 19:42:22.171117    8428 command_runner.go:130] ! I0314 19:19:16.112396       1 shared_informer.go:318] Caches are synced for taint
	I0314 19:42:22.171117    8428 command_runner.go:130] ! I0314 19:19:16.112849       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 19:42:22.171117    8428 command_runner.go:130] ! I0314 19:19:16.113070       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000"
	I0314 19:42:22.171117    8428 command_runner.go:130] ! I0314 19:19:16.113155       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0314 19:42:22.171196    8428 command_runner.go:130] ! I0314 19:19:16.112659       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 19:42:22.171196    8428 command_runner.go:130] ! I0314 19:19:16.113865       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 19:42:22.171196    8428 command_runner.go:130] ! I0314 19:19:16.113966       1 taint_manager.go:210] "Sending events to api server"
	I0314 19:42:22.171196    8428 command_runner.go:130] ! I0314 19:19:16.115068       1 shared_informer.go:318] Caches are synced for namespace
	I0314 19:42:22.171196    8428 command_runner.go:130] ! I0314 19:19:16.118281       1 event.go:307] "Event occurred" object="multinode-442000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000 event: Registered Node multinode-442000 in Controller"
	I0314 19:42:22.171271    8428 command_runner.go:130] ! I0314 19:19:16.134584       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0314 19:42:22.171271    8428 command_runner.go:130] ! I0314 19:19:16.151625       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.171271    8428 command_runner.go:130] ! I0314 19:19:16.171551       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.171349    8428 command_runner.go:130] ! I0314 19:19:16.174341       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.171349    8428 command_runner.go:130] ! I0314 19:19:16.174358       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.171349    8428 command_runner.go:130] ! I0314 19:19:16.184987       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:22.171349    8428 command_runner.go:130] ! I0314 19:19:16.223118       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 19:42:22.171430    8428 command_runner.go:130] ! I0314 19:19:16.225526       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 19:42:22.171430    8428 command_runner.go:130] ! I0314 19:19:16.225950       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 19:42:22.171430    8428 command_runner.go:130] ! I0314 19:19:16.274020       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 19:42:22.171430    8428 command_runner.go:130] ! I0314 19:19:16.320250       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7b9lf"
	I0314 19:42:22.171504    8428 command_runner.go:130] ! I0314 19:19:16.328650       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cg28g"
	I0314 19:42:22.171504    8428 command_runner.go:130] ! I0314 19:19:16.626855       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:22.171504    8428 command_runner.go:130] ! I0314 19:19:16.633099       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:22.171504    8428 command_runner.go:130] ! I0314 19:19:16.633344       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 19:42:22.171582    8428 command_runner.go:130] ! I0314 19:19:16.789964       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0314 19:42:22.171582    8428 command_runner.go:130] ! I0314 19:19:17.099870       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pvxpr"
	I0314 19:42:22.171582    8428 command_runner.go:130] ! I0314 19:19:17.114819       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-d22jc"
	I0314 19:42:22.171659    8428 command_runner.go:130] ! I0314 19:19:17.146456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="355.713874ms"
	I0314 19:42:22.171659    8428 command_runner.go:130] ! I0314 19:19:17.166202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.688691ms"
	I0314 19:42:22.171659    8428 command_runner.go:130] ! I0314 19:19:17.169087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="2.771063ms"
	I0314 19:42:22.171734    8428 command_runner.go:130] ! I0314 19:19:18.399096       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0314 19:42:22.171734    8428 command_runner.go:130] ! I0314 19:19:18.448322       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-pvxpr"
	I0314 19:42:22.171734    8428 command_runner.go:130] ! I0314 19:19:18.482373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.944747ms"
	I0314 19:42:22.171811    8428 command_runner.go:130] ! I0314 19:19:18.500300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.716936ms"
	I0314 19:42:22.171811    8428 command_runner.go:130] ! I0314 19:19:18.500887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.317µs"
	I0314 19:42:22.171811    8428 command_runner.go:130] ! I0314 19:19:26.475232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.515µs"
	I0314 19:42:22.171811    8428 command_runner.go:130] ! I0314 19:19:26.505160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.309µs"
	I0314 19:42:22.171811    8428 command_runner.go:130] ! I0314 19:19:28.423231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.310782ms"
	I0314 19:42:22.171893    8428 command_runner.go:130] ! I0314 19:19:28.423925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.006µs"
	I0314 19:42:22.171926    8428 command_runner.go:130] ! I0314 19:19:31.116802       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0314 19:42:22.171953    8428 command_runner.go:130] ! I0314 19:22:02.467925       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:42:22.171953    8428 command_runner.go:130] ! I0314 19:22:02.479576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m02" podCIDRs=["10.244.1.0/24"]
	I0314 19:42:22.172012    8428 command_runner.go:130] ! I0314 19:22:02.507610       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-72dzs"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:02.511169       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-c7m4p"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:06.145908       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:06.146201       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:20.862710       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.188036       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.218022       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8drpb"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.241867       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-7446n"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.267427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="80.313691ms"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.292961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="25.159362ms"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.311264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.241692ms"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.311407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="93.911µs"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:48.320252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.515467ms"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:48.320403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.303µs"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:48.344640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.018521ms"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:48.344838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.804µs"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:25.208780       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:25.214591       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:25.248082       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.2.0/24"]
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:25.265233       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-r7zdb"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:25.273144       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w2qls"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:26.207170       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:26.207236       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:43.758846       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:33:46.333556       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:33:46.333891       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172559    8428 command_runner.go:130] ! I0314 19:33:46.348976       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.172559    8428 command_runner.go:130] ! I0314 19:33:46.370200       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.172559    8428 command_runner.go:130] ! I0314 19:36:39.868492       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172636    8428 command_runner.go:130] ! I0314 19:36:41.400896       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-442000-m03 event: Removing Node multinode-442000-m03 from Controller"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:36:47.335802       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:36:47.336128       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:36:47.352987       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.3.0/24"]
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:36:51.403261       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:36:54.976864       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:38:21.463528       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:38:21.463818       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:38:21.486796       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:38:21.501217       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:24.692959    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods
	I0314 19:42:24.692959    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:24.692959    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:24.692959    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:24.698307    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:42:24.698307    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:24.698307    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:24.698307    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:24 GMT
	I0314 19:42:24.698307    8428 round_trippers.go:580]     Audit-Id: cfbdcadb-0d12-4859-82dd-7b35a841e2c4
	I0314 19:42:24.698307    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:24.698307    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:24.698307    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:24.699161    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1921"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1908","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83007 chars]
	I0314 19:42:24.703266    8428 system_pods.go:59] 12 kube-system pods found
	I0314 19:42:24.703266    8428 system_pods.go:61] "coredns-5dd5756b68-d22jc" [2a563b3f-a175-4dc2-9f0b-67dbaefbfaac] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "etcd-multinode-442000" [106cc31d-907f-4853-9e8d-f13c8ac4e398] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kindnet-7b9lf" [677b9084-0026-4b21-b041-445940624ed7] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kindnet-c7m4p" [926a47cb-e444-455d-8b74-d17a229020a1] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kindnet-r7zdb" [69b103aa-023b-4243-ba7b-875106aac183] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kube-apiserver-multinode-442000" [ebdd5ddf-2b02-4315-bc64-1b10c383d507] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kube-controller-manager-multinode-442000" [b16fc874-ef74-44ca-a54f-bb678bf982df] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kube-proxy-72dzs" [80b840b0-3803-4102-a966-ea73aed74f49] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kube-proxy-cg28g" [c7f798bf-6722-4731-af8d-ccd5703d116e] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kube-proxy-w2qls" [7a53e602-282e-4b63-a993-a5d23d3c615f] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kube-scheduler-multinode-442000" [76b10598-fe0d-4a14-a8e4-a32221fbb68f] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "storage-provisioner" [65d76566-4401-4b28-8452-10ed98624901] Running
	I0314 19:42:24.703266    8428 system_pods.go:74] duration metric: took 3.7500593s to wait for pod list to return data ...
	I0314 19:42:24.703266    8428 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:42:24.703266    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/default/serviceaccounts
	I0314 19:42:24.703266    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:24.703266    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:24.703266    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:24.706404    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:24.706404    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:24.706404    8428 round_trippers.go:580]     Content-Length: 262
	I0314 19:42:24.706404    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:24 GMT
	I0314 19:42:24.706404    8428 round_trippers.go:580]     Audit-Id: f0249156-d4bf-4c39-be8d-dcff9f92224b
	I0314 19:42:24.706404    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:24.706404    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:24.706404    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:24.706404    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:24.706404    8428 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1921"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"31dfe296-58ba-4a37-a509-52c518a0c41a","resourceVersion":"365","creationTimestamp":"2024-03-14T19:19:16Z"}}]}
	I0314 19:42:24.707321    8428 default_sa.go:45] found service account: "default"
	I0314 19:42:24.707321    8428 default_sa.go:55] duration metric: took 4.0542ms for default service account to be created ...
	I0314 19:42:24.707321    8428 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:42:24.707598    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods
	I0314 19:42:24.707629    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:24.707629    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:24.707629    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:24.711291    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:24.711291    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:24.711291    8428 round_trippers.go:580]     Audit-Id: 05087bd0-2c43-4c05-ad11-6387d183ed88
	I0314 19:42:24.712291    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:24.712291    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:24.712291    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:24.712291    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:24.712291    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:24 GMT
	I0314 19:42:24.713345    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1921"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1908","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83007 chars]
	I0314 19:42:24.715897    8428 system_pods.go:86] 12 kube-system pods found
	I0314 19:42:24.715897    8428 system_pods.go:89] "coredns-5dd5756b68-d22jc" [2a563b3f-a175-4dc2-9f0b-67dbaefbfaac] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "etcd-multinode-442000" [106cc31d-907f-4853-9e8d-f13c8ac4e398] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kindnet-7b9lf" [677b9084-0026-4b21-b041-445940624ed7] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kindnet-c7m4p" [926a47cb-e444-455d-8b74-d17a229020a1] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kindnet-r7zdb" [69b103aa-023b-4243-ba7b-875106aac183] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kube-apiserver-multinode-442000" [ebdd5ddf-2b02-4315-bc64-1b10c383d507] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kube-controller-manager-multinode-442000" [b16fc874-ef74-44ca-a54f-bb678bf982df] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kube-proxy-72dzs" [80b840b0-3803-4102-a966-ea73aed74f49] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kube-proxy-cg28g" [c7f798bf-6722-4731-af8d-ccd5703d116e] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kube-proxy-w2qls" [7a53e602-282e-4b63-a993-a5d23d3c615f] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kube-scheduler-multinode-442000" [76b10598-fe0d-4a14-a8e4-a32221fbb68f] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "storage-provisioner" [65d76566-4401-4b28-8452-10ed98624901] Running
	I0314 19:42:24.715897    8428 system_pods.go:126] duration metric: took 8.5757ms to wait for k8s-apps to be running ...
	I0314 19:42:24.715897    8428 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:42:24.724908    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:42:24.748846    8428 system_svc.go:56] duration metric: took 32.9463ms WaitForService to wait for kubelet
	I0314 19:42:24.748966    8428 kubeadm.go:576] duration metric: took 1m13.90952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
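	(The two steps above probe kubelet over SSH and then summarize every readiness gate minikube waited on. Below is a minimal local sketch of that probe, assuming a plain shell instead of minikube's ssh_runner; it is hypothetical illustration, not minikube's code. "systemctl is-active --quiet" reports state only through its exit code.)

	    // Hypothetical sketch, not minikube's ssh_runner: re-run the exact
	    // command logged above and report whether it exited cleanly.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func kubeletActive() bool {
	        // Mirrors the command recorded in the log; --quiet suppresses
	        // output, so the unit's state is carried by the exit code alone.
	        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	        return cmd.Run() == nil
	    }

	    func main() {
	        fmt.Println("kubelet active:", kubeletActive())
	    }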
	I0314 19:42:24.748966    8428 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:42:24.748966    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes
	I0314 19:42:24.748966    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:24.748966    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:24.748966    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:24.753758    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:24.753758    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:24.753758    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:24.753758    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:25 GMT
	I0314 19:42:24.753838    8428 round_trippers.go:580]     Audit-Id: 163913f8-3487-4480-96f8-d468a3f40123
	I0314 19:42:24.753838    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:24.753838    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:24.753838    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:24.754206    8428 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1921"},"items":[{"metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16256 chars]
	I0314 19:42:24.755363    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:42:24.755435    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:42:24.755435    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:42:24.755435    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:42:24.755435    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:42:24.755435    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:42:24.755508    8428 node_conditions.go:105] duration metric: took 6.5414ms to run NodePressure ...
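Editor's note: the NodePressure check above is a plain GET against /api/v1/nodes followed by a read of each node's capacity fields. A minimal client-go sketch of the same read, assuming a reachable kubeconfig (the path is illustrative, and this is not minikube's actual implementation):

// nodecap.go - list nodes and print the capacity fields the log inspects
// (ephemeral-storage and cpu). Minimal sketch, assuming a local kubeconfig.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}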
	I0314 19:42:24.755508    8428 start.go:240] waiting for startup goroutines ...
	I0314 19:42:24.755508    8428 start.go:245] waiting for cluster config update ...
	I0314 19:42:24.755508    8428 start.go:254] writing updated cluster config ...
	I0314 19:42:24.761079    8428 out.go:177] 
	I0314 19:42:24.767119    8428 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:42:24.772407    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:42:24.772407    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:42:24.777556    8428 out.go:177] * Starting "multinode-442000-m02" worker node in "multinode-442000" cluster
	I0314 19:42:24.781938    8428 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:42:24.781938    8428 cache.go:56] Caching tarball of preloaded images
	I0314 19:42:24.781938    8428 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 19:42:24.781938    8428 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 19:42:24.781938    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:42:24.787627    8428 start.go:360] acquireMachinesLock for multinode-442000-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:42:24.787627    8428 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-442000-m02"
	I0314 19:42:24.787627    8428 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:42:24.787627    8428 fix.go:54] fixHost starting: m02
	I0314 19:42:24.787627    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:26.808599    8428 main.go:141] libmachine: [stdout =====>] : Off
	
	I0314 19:42:26.809623    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:26.809675    8428 fix.go:112] recreateIfNeeded on multinode-442000-m02: state=Stopped err=<nil>
	W0314 19:42:26.809790    8428 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:42:26.814342    8428 out.go:177] * Restarting existing hyperv VM for "multinode-442000-m02" ...
	I0314 19:42:26.816679    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-442000-m02
	I0314 19:42:29.726066    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:42:29.726287    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:29.726287    8428 main.go:141] libmachine: Waiting for host to start...
	I0314 19:42:29.726287    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:31.802428    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:42:31.802649    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:31.802718    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:42:34.120121    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:42:34.120121    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:35.120652    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:37.172337    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:42:37.172770    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:37.172836    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:42:39.446961    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:42:39.446995    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:40.454908    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:42.476048    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:42:42.476163    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:42.476240    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:42:44.783167    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:42:44.783167    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:45.791551    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:47.813359    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:42:47.813359    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:47.814171    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:42:50.074989    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:42:50.074989    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:51.087339    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:53.129558    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:42:53.129841    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:53.129841    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:42:55.505515    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:42:55.505554    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:55.507361    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:57.475661    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:42:57.475661    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:57.476014    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:42:59.841070    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:42:59.841070    8428 main.go:141] libmachine: [stderr =====>] : 
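Editor's note: the "Waiting for host to start..." loop above repeats two PowerShell probes, VM state and then the first NIC address, until the address is non-empty. A minimal Go sketch of that poll, shelling out the same way the log does (the retry budget and sleep interval are illustrative assumptions):

// Poll a Hyper-V VM for its first IP address by shelling out to
// PowerShell, mirroring the two probes in the log. Windows-only sketch.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func ps(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	vm := "multinode-442000-m02" // VM name from the log
	for i := 0; i < 60; i++ {
		state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			log.Fatal(err)
		}
		if state == "Running" {
			ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if ip != "" {
				fmt.Println("VM IP:", ip)
				return
			}
		}
		time.Sleep(time.Second)
	}
	log.Fatal("timed out waiting for VM address")
}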
	I0314 19:42:59.841070    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:42:59.843425    8428 machine.go:94] provisionDockerMachine start ...
	I0314 19:42:59.843425    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:01.777806    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:01.777806    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:01.777964    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:04.152668    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:04.152668    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:04.156507    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:04.156654    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:04.156654    8428 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:43:04.281475    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:43:04.281475    8428 buildroot.go:166] provisioning hostname "multinode-442000-m02"
	I0314 19:43:04.281475    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:06.260591    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:06.260591    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:06.261410    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:08.594834    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:08.594834    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:08.598894    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:08.599265    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:08.599265    8428 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-442000-m02 && echo "multinode-442000-m02" | sudo tee /etc/hostname
	I0314 19:43:08.759647    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-442000-m02
	
	I0314 19:43:08.759647    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:10.753569    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:10.753659    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:10.753826    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:13.116567    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:13.116765    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:13.124233    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:13.124233    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:13.124233    8428 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-442000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-442000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-442000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:43:13.271548    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: 
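Editor's note: the "Using SSH client type: native" lines correspond to Go's golang.org/x/crypto/ssh client. A minimal sketch of running one remote command with key auth, as the provisioner does here; the key path and address are taken from the log, and host-key verification is skipped only because the VM's key is freshly generated:

// Run a single command over SSH with key auth, as the native client in
// the log does. Minimal sketch, not minikube's own code.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa`)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway VM key
	}
	client, err := ssh.Dial("tcp", "172.17.93.200:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("hostname: %s", out)
}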
	I0314 19:43:13.271636    8428 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 19:43:13.271702    8428 buildroot.go:174] setting up certificates
	I0314 19:43:13.271748    8428 provision.go:84] configureAuth start
	I0314 19:43:13.271857    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:15.244755    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:15.245188    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:15.245261    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:17.590466    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:17.591513    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:17.591513    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:19.582246    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:19.583345    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:19.583376    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:21.917219    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:21.917745    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:21.917745    8428 provision.go:143] copyHostCerts
	I0314 19:43:21.917745    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 19:43:21.917745    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 19:43:21.917745    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 19:43:21.918380    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 19:43:21.919206    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 19:43:21.919364    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 19:43:21.919445    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 19:43:21.919594    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 19:43:21.920372    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 19:43:21.920608    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 19:43:21.920608    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 19:43:21.920608    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 19:43:21.921328    8428 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-442000-m02 san=[127.0.0.1 172.17.93.200 localhost minikube multinode-442000-m02]
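Editor's note: the server cert generated above carries the SAN list shown in the log (loopback, the VM IP, and the machine names). A minimal crypto/x509 sketch of a SAN-bearing server cert; it self-signs for brevity, whereas minikube signs with its own CA:

// Generate a server certificate with IP and DNS SANs like those in the
// log. Self-signed sketch only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-442000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log's san=[...] list:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.93.200")},
		DNSNames:    []string{"localhost", "minikube", "multinode-442000-m02"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}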
	I0314 19:43:22.223608    8428 provision.go:177] copyRemoteCerts
	I0314 19:43:22.233198    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:43:22.233198    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:24.189337    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:24.189337    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:24.189337    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:26.509019    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:26.509413    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:26.509697    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:43:26.609851    8428 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3761668s)
	I0314 19:43:26.609890    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 19:43:26.610218    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:43:26.652101    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 19:43:26.652248    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0314 19:43:26.693962    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 19:43:26.694363    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:43:26.735455    8428 provision.go:87] duration metric: took 13.4626972s to configureAuth
	I0314 19:43:26.735455    8428 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:43:26.735455    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:43:26.735455    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:28.704426    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:28.704426    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:28.704689    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:31.087452    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:31.087452    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:31.091352    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:31.091874    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:31.091874    8428 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 19:43:31.229188    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 19:43:31.229188    8428 buildroot.go:70] root file system type: tmpfs
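Editor's note: the provisioner learns the root filesystem type ("tmpfs" here, i.e. a live Buildroot image) by running `df --output=fstype / | tail -n 1` over SSH. A tiny local sketch of the same probe:

// Detect the root filesystem type the way the log does: run df and keep
// the last line. Sketch only.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("root fstype:", strings.TrimSpace(string(out))) // "tmpfs" on the guest
}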
	I0314 19:43:31.229732    8428 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 19:43:31.229849    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:33.210256    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:33.210256    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:33.210256    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:35.543113    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:35.543113    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:35.548106    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:35.548508    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:35.548508    8428 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.93.236"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 19:43:35.712813    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.93.236
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 19:43:35.712813    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:37.672506    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:37.688954    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:37.689089    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:40.056738    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:40.056738    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:40.060403    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:40.060802    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:40.060802    8428 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 19:43:42.342578    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 19:43:42.342578    8428 machine.go:97] duration metric: took 42.495965s to provisionDockerMachine
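Editor's note: the unit swap above is guarded. minikube writes docker.service.new, diffs it against the installed unit, and only on a difference moves it into place and runs daemon-reload / enable / restart; diff exits non-zero both when the files differ and, as in this log, when the old unit does not exist yet. A Go sketch of the same guard, run locally instead of over SSH:

// Idempotent unit update: replace and restart only when the rendered
// unit differs from the installed one. Local sketch of the guarded
// shell command in the log.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
	return cmd.Run()
}

func main() {
	cur := "/lib/systemd/system/docker.service"
	next := cur + ".new"
	// diff exits 0 when identical, non-zero on difference or missing file.
	if err := run("diff", "-u", cur, next); err != nil {
		for _, c := range [][]string{
			{"mv", next, cur},
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if err := run(c[0], c[1:]...); err != nil {
				log.Fatal(err)
			}
		}
	}
}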
	I0314 19:43:42.342578    8428 start.go:293] postStartSetup for "multinode-442000-m02" (driver="hyperv")
	I0314 19:43:42.342578    8428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:43:42.351826    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:43:42.351826    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:44.321500    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:44.322439    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:44.322439    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:46.648572    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:46.648621    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:46.648621    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:43:46.750526    8428 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3983707s)
	I0314 19:43:46.758920    8428 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:43:46.765583    8428 command_runner.go:130] > NAME=Buildroot
	I0314 19:43:46.765583    8428 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 19:43:46.765583    8428 command_runner.go:130] > ID=buildroot
	I0314 19:43:46.765583    8428 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 19:43:46.765583    8428 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 19:43:46.765583    8428 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:43:46.765583    8428 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 19:43:46.766114    8428 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 19:43:46.766728    8428 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 19:43:46.766728    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 19:43:46.776371    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:43:46.792827    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 19:43:46.834338    8428 start.go:296] duration metric: took 4.4914233s for postStartSetup
	I0314 19:43:46.834338    8428 fix.go:56] duration metric: took 1m22.0405476s for fixHost
	I0314 19:43:46.834338    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:48.761489    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:48.761583    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:48.761583    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:51.087077    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:51.087514    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:51.091029    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:51.091636    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:51.091636    8428 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:43:51.221355    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710445431.474296497
	
	I0314 19:43:51.221432    8428 fix.go:216] guest clock: 1710445431.474296497
	I0314 19:43:51.221432    8428 fix.go:229] Guest: 2024-03-14 19:43:51.474296497 +0000 UTC Remote: 2024-03-14 19:43:46.834338 +0000 UTC m=+284.346477901 (delta=4.639958497s)
	I0314 19:43:51.221507    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:53.182528    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:53.182562    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:53.182639    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:55.545891    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:55.545891    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:55.549623    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:55.550241    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:55.550241    8428 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710445431
	I0314 19:43:55.686821    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 19:43:51 UTC 2024
	
	I0314 19:43:55.686821    8428 fix.go:236] clock set: Thu Mar 14 19:43:51 UTC 2024
	 (err=<nil>)
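Editor's note: the clock fix reads the guest's time with `date +%s.%N`, compares it against the host's view, and pins it with `sudo date -s @<seconds>`; here the ~4.6 s drift accumulated while the VM was off is corrected. A sketch of the delta check, run locally; the one-second threshold is an illustrative assumption:

// Compare a machine's clock with ours and flag drift, mirroring the
// guest-clock check in the log. Sketch only.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		log.Fatal(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		log.Fatal(err)
	}
	guest := time.Unix(int64(secs), 0)
	delta := time.Until(guest)
	fmt.Println("delta:", delta)
	if delta < -time.Second || delta > time.Second {
		// Matches the log's corrective command: sudo date -s @<unix-seconds>
		fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
	}
}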
	I0314 19:43:55.686821    8428 start.go:83] releasing machines lock for "multinode-442000-m02", held for 1m30.8923684s
	I0314 19:43:55.687970    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:57.672870    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:57.673525    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:57.673525    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:44:00.030849    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:44:00.030849    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:00.033572    8428 out.go:177] * Found network options:
	I0314 19:44:00.035769    8428 out.go:177]   - NO_PROXY=172.17.93.236
	W0314 19:44:00.037724    8428 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 19:44:00.039504    8428 out.go:177]   - NO_PROXY=172.17.93.236
	W0314 19:44:00.041078    8428 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 19:44:00.042766    8428 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 19:44:00.044757    8428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:44:00.044757    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:44:00.051770    8428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 19:44:00.051770    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:44:02.045819    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:02.045935    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:02.045993    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:44:02.059280    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:02.059280    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:02.059280    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:44:04.505336    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:44:04.505336    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:04.505336    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:44:04.518554    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:44:04.518554    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:04.518554    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:44:04.598121    8428 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0314 19:44:04.598346    8428 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5461369s)
	W0314 19:44:04.598346    8428 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:44:04.609505    8428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:44:04.675195    8428 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 19:44:04.675292    8428 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6301878s)
	I0314 19:44:04.675292    8428 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0314 19:44:04.675449    8428 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:44:04.675449    8428 start.go:494] detecting cgroup driver to use...
	I0314 19:44:04.675704    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:44:04.707714    8428 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0314 19:44:04.717196    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 19:44:04.744752    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 19:44:04.763306    8428 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 19:44:04.772646    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 19:44:04.800624    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:44:04.828339    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 19:44:04.854956    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:44:04.881672    8428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:44:04.907690    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 19:44:04.933871    8428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:44:04.950020    8428 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 19:44:04.958598    8428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:44:04.983787    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:44:05.171967    8428 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 19:44:05.201671    8428 start.go:494] detecting cgroup driver to use...
	I0314 19:44:05.216543    8428 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 19:44:05.244196    8428 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0314 19:44:05.244196    8428 command_runner.go:130] > [Unit]
	I0314 19:44:05.244196    8428 command_runner.go:130] > Description=Docker Application Container Engine
	I0314 19:44:05.244196    8428 command_runner.go:130] > Documentation=https://docs.docker.com
	I0314 19:44:05.244196    8428 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0314 19:44:05.244196    8428 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0314 19:44:05.244196    8428 command_runner.go:130] > StartLimitBurst=3
	I0314 19:44:05.244196    8428 command_runner.go:130] > StartLimitIntervalSec=60
	I0314 19:44:05.244196    8428 command_runner.go:130] > [Service]
	I0314 19:44:05.244196    8428 command_runner.go:130] > Type=notify
	I0314 19:44:05.244196    8428 command_runner.go:130] > Restart=on-failure
	I0314 19:44:05.244196    8428 command_runner.go:130] > Environment=NO_PROXY=172.17.93.236
	I0314 19:44:05.244196    8428 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0314 19:44:05.244196    8428 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0314 19:44:05.244196    8428 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0314 19:44:05.244196    8428 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0314 19:44:05.244196    8428 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0314 19:44:05.244196    8428 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0314 19:44:05.244196    8428 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0314 19:44:05.244196    8428 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0314 19:44:05.244196    8428 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0314 19:44:05.244196    8428 command_runner.go:130] > ExecStart=
	I0314 19:44:05.244196    8428 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0314 19:44:05.244196    8428 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0314 19:44:05.244196    8428 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0314 19:44:05.244196    8428 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0314 19:44:05.244196    8428 command_runner.go:130] > LimitNOFILE=infinity
	I0314 19:44:05.244720    8428 command_runner.go:130] > LimitNPROC=infinity
	I0314 19:44:05.244720    8428 command_runner.go:130] > LimitCORE=infinity
	I0314 19:44:05.244720    8428 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0314 19:44:05.244720    8428 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0314 19:44:05.244780    8428 command_runner.go:130] > TasksMax=infinity
	I0314 19:44:05.244780    8428 command_runner.go:130] > TimeoutStartSec=0
	I0314 19:44:05.244822    8428 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0314 19:44:05.244822    8428 command_runner.go:130] > Delegate=yes
	I0314 19:44:05.244822    8428 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0314 19:44:05.244886    8428 command_runner.go:130] > KillMode=process
	I0314 19:44:05.244886    8428 command_runner.go:130] > [Install]
	I0314 19:44:05.244925    8428 command_runner.go:130] > WantedBy=multi-user.target
	I0314 19:44:05.254966    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:44:05.284772    8428 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:44:05.316522    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:44:05.346740    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:44:05.378469    8428 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 19:44:05.434710    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:44:05.457345    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:44:05.486496    8428 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0314 19:44:05.496594    8428 ssh_runner.go:195] Run: which cri-dockerd
	I0314 19:44:05.502693    8428 command_runner.go:130] > /usr/bin/cri-dockerd
	I0314 19:44:05.511454    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 19:44:05.528357    8428 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 19:44:05.566730    8428 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 19:44:05.755177    8428 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 19:44:05.932341    8428 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 19:44:05.932451    8428 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 19:44:05.971592    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:44:06.153863    8428 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 19:44:08.743376    8428 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5892643s)
	I0314 19:44:08.752821    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 19:44:08.783374    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:44:08.817883    8428 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 19:44:09.004360    8428 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 19:44:09.185525    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:44:09.361058    8428 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 19:44:09.397440    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:44:09.428488    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:44:09.610459    8428 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 19:44:09.712439    8428 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 19:44:09.724634    8428 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 19:44:09.732955    8428 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0314 19:44:09.732955    8428 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 19:44:09.732955    8428 command_runner.go:130] > Device: 0,22	Inode: 846         Links: 1
	I0314 19:44:09.732955    8428 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0314 19:44:09.732955    8428 command_runner.go:130] > Access: 2024-03-14 19:44:09.889828811 +0000
	I0314 19:44:09.732955    8428 command_runner.go:130] > Modify: 2024-03-14 19:44:09.889828811 +0000
	I0314 19:44:09.732955    8428 command_runner.go:130] > Change: 2024-03-14 19:44:09.893829164 +0000
	I0314 19:44:09.733104    8428 command_runner.go:130] >  Birth: -
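Editor's note: "Will wait 60s for socket path" is a simple stat poll on /var/run/cri-dockerd.sock until a socket file appears. A minimal sketch with the same budget; the poll interval is an illustrative assumption:

// Wait up to 60s for a unix socket to appear, as the log does for
// /var/run/cri-dockerd.sock. Sketch only.
package main

import (
	"log"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return os.ErrDeadlineExceeded
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("socket ready")
}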
	I0314 19:44:09.733155    8428 start.go:562] Will wait 60s for crictl version
	I0314 19:44:09.741469    8428 ssh_runner.go:195] Run: which crictl
	I0314 19:44:09.746596    8428 command_runner.go:130] > /usr/bin/crictl
	I0314 19:44:09.756167    8428 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:44:09.828061    8428 command_runner.go:130] > Version:  0.1.0
	I0314 19:44:09.828061    8428 command_runner.go:130] > RuntimeName:  docker
	I0314 19:44:09.828061    8428 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0314 19:44:09.828061    8428 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 19:44:09.828061    8428 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 19:44:09.837622    8428 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:44:09.873435    8428 command_runner.go:130] > 25.0.4
	I0314 19:44:09.880979    8428 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:44:09.912289    8428 command_runner.go:130] > 25.0.4
	I0314 19:44:09.916104    8428 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 19:44:09.918093    8428 out.go:177]   - env NO_PROXY=172.17.93.236
	I0314 19:44:09.920068    8428 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 19:44:09.924061    8428 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 19:44:09.924061    8428 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 19:44:09.924061    8428 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 19:44:09.924061    8428 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 19:44:09.926404    8428 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 19:44:09.926404    8428 ip.go:210] interface addr: 172.17.80.1/20
	I0314 19:44:09.937245    8428 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 19:44:09.942748    8428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
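Editor's note: the host.minikube.internal entry is refreshed with a filter-then-append rewrite: drop any existing line for that name, append the current gateway IP, write to a temp file, then copy it over /etc/hosts. A Go sketch of the same rewrite (stopping at the temp file, since the final copy still needs root):

// Refresh one /etc/hosts entry the way the log's bash one-liner does.
// Sketch only; output path is the temp-file step, not /etc/hosts itself.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "172.17.80.1\thost.minikube.internal") // gateway IP from the log
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}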
	I0314 19:44:09.963451    8428 mustload.go:65] Loading cluster: multinode-442000
	I0314 19:44:09.964043    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:44:09.964509    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:44:12.011664    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:12.011664    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:12.011664    8428 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:44:12.012607    8428 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000 for IP: 172.17.93.200
	I0314 19:44:12.012607    8428 certs.go:194] generating shared ca certs ...
	I0314 19:44:12.012607    8428 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:44:12.013204    8428 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 19:44:12.013421    8428 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 19:44:12.013626    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 19:44:12.013844    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 19:44:12.013986    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 19:44:12.014022    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 19:44:12.014022    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 19:44:12.014557    8428 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 19:44:12.014662    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 19:44:12.014775    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 19:44:12.014989    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 19:44:12.015190    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 19:44:12.015572    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 19:44:12.015673    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 19:44:12.015767    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 19:44:12.015900    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:44:12.016007    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:44:12.062723    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 19:44:12.105466    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:44:12.148126    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 19:44:12.188631    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 19:44:12.236602    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 19:44:12.278564    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
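
The scp steps above push the shared CA material onto the new node before it is wired into the trust store. As a rough illustration of that push (not minikube's actual ssh_runner code), this Go sketch copies a local cert onto a node over SSH by piping it through sudo tee; the host, user, and file names echo this log but are placeholders here:

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("id_rsa") // placeholder key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "172.17.93.200:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway VM, not production
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	data, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// Pipe the payload into `sudo tee`, since /var/lib/minikube/certs is root-owned.
	if err := sess.Run(`sudo tee /var/lib/minikube/certs/ca.crt >/dev/null`); err != nil {
		panic(err)
	}
	fmt.Println("copied ca.crt")
}
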
	I0314 19:44:12.330250    8428 ssh_runner.go:195] Run: openssl version
	I0314 19:44:12.337936    8428 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 19:44:12.347970    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 19:44:12.376306    8428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 19:44:12.383055    8428 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:44:12.383055    8428 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:44:12.391962    8428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 19:44:12.399937    8428 command_runner.go:130] > 51391683
	I0314 19:44:12.409261    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
	I0314 19:44:12.436253    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 19:44:12.469463    8428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 19:44:12.477415    8428 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:44:12.477415    8428 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:44:12.485416    8428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 19:44:12.495082    8428 command_runner.go:130] > 3ec20f2e
	I0314 19:44:12.508688    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:44:12.544212    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:44:12.572103    8428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:44:12.578992    8428 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:44:12.578992    8428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:44:12.588463    8428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:44:12.597619    8428 command_runner.go:130] > b5213941
	I0314 19:44:12.606348    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
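
The openssl/ln sequence above is how OpenSSL-style trust directories work: /etc/ssl/certs is scanned by subject hash, so each CA gets a <hash>.0 symlink (b5213941.0 for minikubeCA here). A minimal Go sketch of that one step, assuming openssl is on PATH and the paths below are writable:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Mirror `ln -fs`: drop any stale link, then point <hash>.0 at the cert.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", cert)
}
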
	I0314 19:44:12.633790    8428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:44:12.640836    8428 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:44:12.640970    8428 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:44:12.641173    8428 kubeadm.go:928] updating node {m02 172.17.93.200 8443 v1.28.4 docker false true} ...
	I0314 19:44:12.641223    8428 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-442000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.93.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
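
The drop-in rendered above overrides kubelet's ExecStart with the per-node hostname, kubeconfig, and --node-ip before being scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch of rendering that unit with text/template; the field names are illustrative, not minikube's actual template data:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values below are the ones visible in this log.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.28.4/kubelet",
		"NodeName":    "multinode-442000-m02",
		"NodeIP":      "172.17.93.200",
	})
}
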
	I0314 19:44:12.650957    8428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:44:12.668250    8428 command_runner.go:130] > kubeadm
	I0314 19:44:12.668273    8428 command_runner.go:130] > kubectl
	I0314 19:44:12.668273    8428 command_runner.go:130] > kubelet
	I0314 19:44:12.668343    8428 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:44:12.677540    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0314 19:44:12.695074    8428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0314 19:44:12.726385    8428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:44:12.762264    8428 ssh_runner.go:195] Run: grep 172.17.93.236	control-plane.minikube.internal$ /etc/hosts
	I0314 19:44:12.768995    8428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.93.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
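
The one-liner above makes the control-plane.minikube.internal mapping idempotent: filter out any existing entry, append the current IP, and swap the result into place via a temp file. The same filtering in Go, as a sketch (a real writer would still need the temp-file-plus-sudo-cp dance for the root-owned /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "172.17.93.236"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Keep every line that does not already end in "<tab>control-plane.minikube.internal",
	// matching the grep -v $'\t...$' filter above.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	fmt.Println(strings.Join(kept, "\n"))
}
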
	I0314 19:44:12.797587    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:44:12.993042    8428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:44:13.020969    8428 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:44:13.021509    8428 start.go:316] joinCluster: &{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.93.236 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.93.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.84.215 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:44:13.021509    8428 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.17.93.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:44:13.021509    8428 host.go:66] Checking if "multinode-442000-m02" exists ...
	I0314 19:44:13.022125    8428 mustload.go:65] Loading cluster: multinode-442000
	I0314 19:44:13.022571    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:44:13.022732    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:44:15.004316    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:15.004743    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:15.004743    8428 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:44:15.005313    8428 api_server.go:166] Checking apiserver status ...
	I0314 19:44:15.014175    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:44:15.014175    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:44:17.011297    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:17.011297    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:17.011974    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:44:19.350271    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:44:19.350271    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:19.351153    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:44:19.467782    8428 command_runner.go:130] > 2008
	I0314 19:44:19.468498    8428 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.4539366s)
	I0314 19:44:19.479317    8428 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2008/cgroup
	W0314 19:44:19.497262    8428 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2008/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:44:19.508930    8428 ssh_runner.go:195] Run: ls
	I0314 19:44:19.518325    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:44:19.529186    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 200:
	ok
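
	Having found the apiserver PID, minikube falls back from the missing freezer cgroup to a plain HTTPS probe of /healthz, which returns 200 "ok" above. A hedged Go sketch of that probe; the real check trusts the cluster CA, whereas this one skips verification purely to stay short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	c := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: a production check would load the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get("https://172.17.93.236:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
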
	I0314 19:44:19.544218    8428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-442000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0314 19:44:19.693698    8428 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-c7m4p, kube-system/kube-proxy-72dzs
	I0314 19:44:22.732229    8428 command_runner.go:130] > node/multinode-442000-m02 cordoned
	I0314 19:44:22.732229    8428 command_runner.go:130] > pod "busybox-5b5d89c9d6-8drpb" has DeletionTimestamp older than 1 seconds, skipping
	I0314 19:44:22.732355    8428 command_runner.go:130] > node/multinode-442000-m02 drained
	I0314 19:44:22.732355    8428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-442000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1878989s)
	I0314 19:44:22.732355    8428 node.go:128] successfully drained node "multinode-442000-m02"
	I0314 19:44:22.732355    8428 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0314 19:44:22.732666    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:44:24.694226    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:24.694226    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:24.694226    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:44:27.034071    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:44:27.034071    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:27.034571    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:44:27.419352    8428 command_runner.go:130] > [preflight] Running pre-flight checks
	I0314 19:44:27.421117    8428 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0314 19:44:27.422201    8428 command_runner.go:130] > [reset] Stopping the kubelet service
	I0314 19:44:27.436164    8428 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0314 19:44:28.014663    8428 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0314 19:44:28.034315    8428 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0314 19:44:28.034435    8428 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0314 19:44:28.034435    8428 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0314 19:44:28.034435    8428 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0314 19:44:28.034435    8428 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0314 19:44:28.034490    8428 command_runner.go:130] > to reset your system's IPVS tables.
	I0314 19:44:28.034490    8428 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0314 19:44:28.034490    8428 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0314 19:44:28.036255    8428 command_runner.go:130] ! W0314 19:44:27.679521    1550 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0314 19:44:28.036387    8428 command_runner.go:130] ! W0314 19:44:28.271846    1550 cleanupnode.go:99] [reset] Failed to remove containers: failed to stop running pod a2877e9c2a8bda33c0139c1a1bf02c535834060c5ea2dbf379c752c83c6a304c: output: E0314 19:44:27.964246    1610 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-5b5d89c9d6-8drpb_default\" network: cni config uninitialized" podSandboxID="a2877e9c2a8bda33c0139c1a1bf02c535834060c5ea2dbf379c752c83c6a304c"
	I0314 19:44:28.036532    8428 command_runner.go:130] ! time="2024-03-14T19:44:27Z" level=fatal msg="stopping the pod sandbox \"a2877e9c2a8bda33c0139c1a1bf02c535834060c5ea2dbf379c752c83c6a304c\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-5b5d89c9d6-8drpb_default\" network: cni config uninitialized"
	I0314 19:44:28.036532    8428 command_runner.go:130] ! : exit status 1
	I0314 19:44:28.036634    8428 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.303688s)
	I0314 19:44:28.036770    8428 node.go:155] successfully reset node "multinode-442000-m02"
	I0314 19:44:28.037570    8428 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:44:28.038453    8428 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.93.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:44:28.039575    8428 cert_rotation.go:137] Starting client certificate rotation controller
	I0314 19:44:28.039934    8428 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0314 19:44:28.040011    8428 round_trippers.go:463] DELETE https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:28.040079    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:28.040079    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:28.040079    8428 round_trippers.go:473]     Content-Type: application/json
	I0314 19:44:28.040114    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:28.062066    8428 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0314 19:44:28.062066    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:28.062129    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:28 GMT
	I0314 19:44:28.062129    8428 round_trippers.go:580]     Audit-Id: 07eab05f-7218-4546-b48a-64d5d569cb3d
	I0314 19:44:28.062129    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:28.062129    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:28.062159    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:28.062159    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:28.062159    8428 round_trippers.go:580]     Content-Length: 171
	I0314 19:44:28.062187    8428 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-442000-m02","kind":"nodes","uid":"5f369d83-fce6-47fe-b14b-171ed626975b"}}
	I0314 19:44:28.062187    8428 node.go:180] successfully deleted node "multinode-442000-m02"
	I0314 19:44:28.062187    8428 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.17.93.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
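
Because m02 already existed, the join is preceded by a full removal: drain, kubeadm reset on the node, then a DELETE of the Node object (the raw request/response above). The same delete expressed with client-go, as a sketch; the kubeconfig path is the on-node one from this log and would differ when run from the host:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of the DELETE /api/v1/nodes/multinode-442000-m02 call logged above.
	if err := cs.CoreV1().Nodes().Delete(context.Background(),
		"multinode-442000-m02", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
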
	I0314 19:44:28.062187    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0314 19:44:28.062187    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:44:30.043379    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:30.043379    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:30.043464    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:44:32.376935    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:44:32.376935    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:32.377349    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:44:32.616519    8428 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lkihtm.szfhj1z8jquppx08 --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb 
	I0314 19:44:32.616574    8428 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5540472s)
	I0314 19:44:32.616707    8428 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.93.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:44:32.616767    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lkihtm.szfhj1z8jquppx08 --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-442000-m02"
	I0314 19:44:32.851207    8428 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:44:35.673533    8428 command_runner.go:130] > [preflight] Running pre-flight checks
	I0314 19:44:35.673609    8428 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0314 19:44:35.673609    8428 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0314 19:44:35.673609    8428 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:44:35.673691    8428 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:44:35.673691    8428 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0314 19:44:35.673691    8428 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0314 19:44:35.673691    8428 command_runner.go:130] > This node has joined the cluster:
	I0314 19:44:35.673691    8428 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0314 19:44:35.673756    8428 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0314 19:44:35.673805    8428 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0314 19:44:35.673805    8428 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lkihtm.szfhj1z8jquppx08 --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-442000-m02": (3.0567517s)
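
The rejoin itself is two shell hops: mint a join command on the control plane with a non-expiring token, then run it on the worker with --ignore-preflight-errors=all and an explicit --node-name. A sketch of the first hop, assuming it runs where kubeadm and admin credentials live (the log pins the full /var/lib/minikube/binaries/v1.28.4 path instead of relying on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Prints something like:
	// kubeadm join control-plane.minikube.internal:8443 --token ... --discovery-token-ca-cert-hash sha256:...
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out)))
}
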
	I0314 19:44:35.673900    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0314 19:44:35.884827    8428 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0314 19:44:36.082729    8428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-442000-m02 minikube.k8s.io/updated_at=2024_03_14T19_44_36_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=multinode-442000 minikube.k8s.io/primary=false
	I0314 19:44:36.230888    8428 command_runner.go:130] > node/multinode-442000-m02 labeled
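
The `kubectl label --overwrite nodes ...` step above is equivalent to a merge patch on the Node's metadata.labels. A client-go sketch of overwriting one of the labels just applied (payload trimmed to a single label for brevity; kubeconfig path illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// --overwrite semantics: a merge patch replaces the label if it already exists.
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"false"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(context.Background(), "multinode-442000-m02",
		types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
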
	I0314 19:44:36.231007    8428 start.go:318] duration metric: took 23.207766s to joinCluster
	I0314 19:44:36.231133    8428 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.93.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:44:36.251113    8428 out.go:177] * Verifying Kubernetes components...
	I0314 19:44:36.231711    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:44:36.264093    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:44:36.480416    8428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:44:36.516051    8428 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:44:36.516651    8428 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.93.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:44:36.517650    8428 node_ready.go:35] waiting up to 6m0s for node "multinode-442000-m02" to be "Ready" ...
	I0314 19:44:36.517650    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:36.517650    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:36.517650    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:36.517650    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:36.522545    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:36.522899    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:36.522899    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:36.522899    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:36 GMT
	I0314 19:44:36.522899    8428 round_trippers.go:580]     Audit-Id: f151b53b-b6c9-4e7d-83a3-1dce6974d5e5
	I0314 19:44:36.522899    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:36.522899    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:36.522899    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:36.523118    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2061","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3688 chars]
	I0314 19:44:37.020808    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:37.020889    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:37.020889    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:37.020889    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:37.024184    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:37.024704    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:37.024704    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:37.024704    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:37.024704    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:37 GMT
	I0314 19:44:37.024704    8428 round_trippers.go:580]     Audit-Id: 009ac116-9398-40d2-ae94-8ec46b3b4e95
	I0314 19:44:37.024704    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:37.024704    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:37.024963    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2061","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3688 chars]
	I0314 19:44:37.520754    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:37.521019    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:37.521019    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:37.521019    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:37.524988    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:37.525328    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:37.525328    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:37.525328    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:37 GMT
	I0314 19:44:37.525328    8428 round_trippers.go:580]     Audit-Id: 1e022147-cc71-4930-8c96-3d4fb5d43d2f
	I0314 19:44:37.525328    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:37.525384    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:37.525384    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:37.525567    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2061","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3688 chars]
	I0314 19:44:38.021838    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:38.021907    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:38.021907    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:38.021907    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:38.026490    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:38.026490    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:38.026490    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:38.026490    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:38 GMT
	I0314 19:44:38.026490    8428 round_trippers.go:580]     Audit-Id: 606f2489-4a52-49e8-b3c1-544d0f45ce12
	I0314 19:44:38.026490    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:38.026490    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:38.026490    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:38.026490    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2061","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3688 chars]
	I0314 19:44:38.522636    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:38.522636    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:38.522845    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:38.522845    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:38.527373    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:38.527373    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:38.527373    8428 round_trippers.go:580]     Audit-Id: b54c0c43-5bc3-4769-b1fd-ad0e6184979d
	I0314 19:44:38.527373    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:38.527373    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:38.527373    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:38.527373    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:38.527373    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:38 GMT
	I0314 19:44:38.528059    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2061","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3688 chars]
	I0314 19:44:38.528059    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:39.022793    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:39.022881    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:39.022881    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:39.022881    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:39.027371    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:39.027456    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:39.027456    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:39 GMT
	I0314 19:44:39.027551    8428 round_trippers.go:580]     Audit-Id: c76c39d0-a348-465a-bad6-e031a98aa3f3
	I0314 19:44:39.027597    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:39.027626    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:39.027626    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:39.027626    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:39.027626    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:39.524181    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:39.524181    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:39.524277    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:39.524277    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:39.528596    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:39.528964    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:39.528964    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:39 GMT
	I0314 19:44:39.528964    8428 round_trippers.go:580]     Audit-Id: be8f6a95-4345-41b8-83e1-2154edbed859
	I0314 19:44:39.528964    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:39.528964    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:39.528964    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:39.528964    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:39.529658    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:40.022622    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:40.022704    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:40.022704    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:40.022704    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:40.027921    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:40.027921    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:40.027921    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:40.027921    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:40.027921    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:40.027921    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:40 GMT
	I0314 19:44:40.027921    8428 round_trippers.go:580]     Audit-Id: 6d92ac80-9360-426a-b1a0-7ce7c32daec3
	I0314 19:44:40.027921    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:40.027921    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:40.524210    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:40.524291    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:40.524291    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:40.524291    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:40.528132    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:40.528132    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:40.528132    8428 round_trippers.go:580]     Audit-Id: b30a3256-6123-48eb-b236-48446c8eef47
	I0314 19:44:40.528132    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:40.528132    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:40.528132    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:40.528132    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:40.528132    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:40 GMT
	I0314 19:44:40.528132    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:40.528651    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:41.023193    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:41.023193    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:41.023193    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:41.023193    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:41.026769    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:41.026769    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:41.026769    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:41.026769    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:41 GMT
	I0314 19:44:41.027264    8428 round_trippers.go:580]     Audit-Id: f2a31b0c-b71c-4293-8cdb-325c1d21de88
	I0314 19:44:41.027264    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:41.027264    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:41.027264    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:41.027382    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:41.525036    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:41.525107    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:41.525107    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:41.525107    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:41.530881    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:41.530881    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:41.530881    8428 round_trippers.go:580]     Audit-Id: 457ddfb4-88a3-49c1-8a9c-5c9b8799ba35
	I0314 19:44:41.530881    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:41.530881    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:41.530881    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:41.530881    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:41.530881    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:41 GMT
	I0314 19:44:41.530881    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:42.022897    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:42.022987    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:42.022987    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:42.022987    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:42.028650    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:42.028650    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:42.028650    8428 round_trippers.go:580]     Audit-Id: f5fc9e4f-2fd4-457a-867c-d12d3b12e9c4
	I0314 19:44:42.028650    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:42.028650    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:42.029179    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:42.029179    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:42.029179    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:42 GMT
	I0314 19:44:42.029317    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:42.521467    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:42.521543    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:42.521543    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:42.521543    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:42.527184    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:42.527184    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:42.527184    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:42.527184    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:42.527184    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:42 GMT
	I0314 19:44:42.527184    8428 round_trippers.go:580]     Audit-Id: dff35d79-e600-4dd2-85f3-a3185dd1cf2a
	I0314 19:44:42.527184    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:42.527184    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:42.527859    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:43.022079    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:43.022159    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:43.022159    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:43.022159    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:43.026530    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:43.026530    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:43.026615    8428 round_trippers.go:580]     Audit-Id: 92c300b6-3443-4bed-a12b-86fd2381d3bf
	I0314 19:44:43.026615    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:43.026615    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:43.026669    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:43.026669    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:43.026669    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:43 GMT
	I0314 19:44:43.026772    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:43.027300    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
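The GET/response cycles above and below are minikube's node-readiness wait loop polling the API server roughly every 500 ms until multinode-442000-m02 reports Ready; each cycle ends in a node_ready.go status line like the one just shown. As a minimal sketch of that pattern (assuming client-go; waitNodeReady and the package name are illustrative stand-ins, not minikube's actual node_ready.go code):

	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls GET /api/v1/nodes/<name> about twice per second until
	// the node's Ready condition is True or ctx is cancelled, mirroring the
	// ~500 ms request cadence visible in the log. Hypothetical helper, not the
	// real minikube implementation.
	func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string) error {
		return wait.PollUntilContextCancel(ctx, 500*time.Millisecond, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient API errors as "not ready yet"
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						// corresponds to the log's `node "..." has status "Ready":"False"` lines
						fmt.Printf("node %q has status \"Ready\":%q\n", name, cond.Status)
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // Ready condition not posted yet
			})
	}

Each loop iteration corresponds to one round_trippers request/response pair in the log; the loop exits once the kubelet flips the NodeReady condition to True, which is why the status line keeps printing "Ready":"False" while the node object's resourceVersion advances.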
	I0314 19:44:43.524207    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:43.524207    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:43.524207    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:43.524207    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:43.527851    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:43.527851    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:43.527851    8428 round_trippers.go:580]     Audit-Id: 7106586b-f89a-4a30-97c9-fb26b6589539
	I0314 19:44:43.527851    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:43.527851    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:43.527851    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:43.527851    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:43.528792    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:43 GMT
	I0314 19:44:43.528947    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:44.021420    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:44.021595    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:44.021595    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:44.021595    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:44.025262    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:44.025744    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:44.025744    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:44 GMT
	I0314 19:44:44.025744    8428 round_trippers.go:580]     Audit-Id: a2081053-dc2e-40ed-a363-79541e921114
	I0314 19:44:44.025744    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:44.025744    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:44.025744    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:44.025744    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:44.025960    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:44.522760    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:44.522850    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:44.522850    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:44.522850    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:44.527016    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:44.527016    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:44.527128    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:44.527128    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:44.527128    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:44.527174    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:44 GMT
	I0314 19:44:44.527174    8428 round_trippers.go:580]     Audit-Id: 250e711d-ebf7-49e4-8b58-b03372d08dca
	I0314 19:44:44.527174    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:44.527174    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:45.024098    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:45.024371    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:45.024371    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:45.024371    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:45.027820    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:45.027820    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:45.028198    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:45.028198    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:45 GMT
	I0314 19:44:45.028198    8428 round_trippers.go:580]     Audit-Id: 9f300c51-6b2c-40ef-a72c-0e41bba8af55
	I0314 19:44:45.028198    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:45.028198    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:45.028198    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:45.028324    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:45.028811    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:45.524612    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:45.524797    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:45.524797    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:45.524830    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:45.528555    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:45.529340    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:45.529340    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:45.529384    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:45.529384    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:45.529384    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:45 GMT
	I0314 19:44:45.529384    8428 round_trippers.go:580]     Audit-Id: b72d7e2d-2e27-4687-8d92-47b50540e97c
	I0314 19:44:45.529384    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:45.529384    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:46.024341    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:46.024430    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:46.024430    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:46.024430    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:46.027725    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:46.027725    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:46.027725    8428 round_trippers.go:580]     Audit-Id: cc505574-5779-4e3a-a679-08ef6f53473f
	I0314 19:44:46.027725    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:46.027725    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:46.027725    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:46.027725    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:46.027725    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:46 GMT
	I0314 19:44:46.028585    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:46.527107    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:46.527323    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:46.527323    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:46.527323    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:46.530708    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:46.531253    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:46.531253    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:46.531253    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:46.531253    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:46.531253    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:46 GMT
	I0314 19:44:46.531253    8428 round_trippers.go:580]     Audit-Id: 7dffc06f-e25b-4936-bfef-969df03e0e7e
	I0314 19:44:46.531253    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:46.531436    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:47.026797    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:47.026797    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:47.026797    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:47.026797    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:47.030503    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:47.030503    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:47.030503    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:47 GMT
	I0314 19:44:47.030503    8428 round_trippers.go:580]     Audit-Id: be40900e-d975-47b7-a9d7-ad52fda93fd0
	I0314 19:44:47.030503    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:47.030503    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:47.030503    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:47.030503    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:47.031318    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:47.031900    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:47.528060    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:47.528060    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:47.528060    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:47.528060    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:47.533377    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:47.533377    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:47.533459    8428 round_trippers.go:580]     Audit-Id: 9428b3c7-89a6-4bd6-b6c1-447de9aa240b
	I0314 19:44:47.533459    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:47.533459    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:47.533459    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:47.533459    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:47.533459    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:47 GMT
	I0314 19:44:47.533656    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:48.024627    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:48.024680    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:48.024733    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:48.024733    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:48.029113    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:48.029113    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:48.029113    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:48 GMT
	I0314 19:44:48.029113    8428 round_trippers.go:580]     Audit-Id: 49cbee37-2f7d-4769-95e5-bbb9b2a8811f
	I0314 19:44:48.029113    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:48.029113    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:48.029113    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:48.029113    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:48.029113    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:48.532419    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:48.532531    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:48.532531    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:48.532531    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:48.536483    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:48.536574    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:48.536611    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:48.536611    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:48.536611    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:48 GMT
	I0314 19:44:48.536611    8428 round_trippers.go:580]     Audit-Id: 19f0e3ed-bd07-4f43-9751-5820c71e20d5
	I0314 19:44:48.536611    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:48.536611    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:48.536611    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:49.026744    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:49.026744    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:49.026744    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:49.026744    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:49.031450    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:49.031529    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:49.031529    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:49.031529    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:49.031529    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:49 GMT
	I0314 19:44:49.031529    8428 round_trippers.go:580]     Audit-Id: 8b81c027-d892-4816-94bc-d00c6a714181
	I0314 19:44:49.031529    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:49.031529    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:49.031617    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:49.032076    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:49.530160    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:49.530230    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:49.530230    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:49.530230    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:49.534526    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:49.534526    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:49.534526    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:49.534526    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:49 GMT
	I0314 19:44:49.534526    8428 round_trippers.go:580]     Audit-Id: 226ca101-665e-4eab-a5de-75ade9244506
	I0314 19:44:49.534526    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:49.534526    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:49.534526    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:49.535004    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:50.031212    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:50.031507    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:50.031507    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:50.031507    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:50.040419    8428 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 19:44:50.040419    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:50.040419    8428 round_trippers.go:580]     Audit-Id: 992d37ba-898e-4e34-9ee6-7678bf0fea9c
	I0314 19:44:50.040419    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:50.040419    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:50.040419    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:50.040419    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:50.040419    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:50 GMT
	I0314 19:44:50.042114    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:50.530902    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:50.531131    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:50.531215    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:50.531215    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:50.535026    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:50.535026    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:50.535026    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:50.535389    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:50.535389    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:50 GMT
	I0314 19:44:50.535389    8428 round_trippers.go:580]     Audit-Id: ad2129bf-28c0-4b07-b974-a7c3a1cb2bde
	I0314 19:44:50.535389    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:50.535389    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:50.535598    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:51.032508    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:51.032585    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:51.032585    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:51.032585    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:51.036268    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:51.036426    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:51.036426    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:51.036426    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:51.036426    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:51 GMT
	I0314 19:44:51.036426    8428 round_trippers.go:580]     Audit-Id: e9987cc4-fff5-460d-80fb-898ad9c42b74
	I0314 19:44:51.036426    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:51.036426    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:51.036490    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:51.037020    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:51.520101    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:51.520101    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:51.520101    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:51.520101    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:51.523812    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:51.523812    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:51.523812    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:51.523812    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:51.523812    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:51 GMT
	I0314 19:44:51.523812    8428 round_trippers.go:580]     Audit-Id: c4ffd01c-8f45-44be-80b7-24213b5d7af9
	I0314 19:44:51.523812    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:51.523812    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:51.524802    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:52.021468    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:52.021468    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:52.021527    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:52.021527    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:52.024756    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:52.025353    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:52.025353    8428 round_trippers.go:580]     Audit-Id: 47827d2c-26e1-44a3-8f95-1430870c470c
	I0314 19:44:52.025399    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:52.025399    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:52.025399    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:52.025399    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:52.025399    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:52 GMT
	I0314 19:44:52.025548    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:52.523383    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:52.523590    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:52.523590    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:52.523590    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:52.528867    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:52.529775    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:52.529775    8428 round_trippers.go:580]     Audit-Id: f5b9006c-0760-4f05-a42b-4f7bd99f77cb
	I0314 19:44:52.529775    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:52.529775    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:52.529775    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:52.529775    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:52.529775    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:52 GMT
	I0314 19:44:52.529974    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:53.026213    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:53.026287    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:53.026287    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:53.026287    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:53.029960    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:53.030059    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:53.030059    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:53.030059    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:53 GMT
	I0314 19:44:53.030059    8428 round_trippers.go:580]     Audit-Id: be5e9cdf-a475-4f2f-a45c-188880ba2984
	I0314 19:44:53.030151    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:53.030241    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:53.030241    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:53.030483    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:53.527960    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:53.527960    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:53.527960    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:53.527960    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:53.533130    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:53.533130    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:53.533130    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:53 GMT
	I0314 19:44:53.533130    8428 round_trippers.go:580]     Audit-Id: 949fe0c9-6ce0-4473-a6b8-6bb98425793e
	I0314 19:44:53.533130    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:53.533130    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:53.533130    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:53.533130    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:53.533744    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:53.534086    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:54.025392    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:54.025474    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:54.025474    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:54.025474    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:54.031248    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:54.031248    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:54.031306    8428 round_trippers.go:580]     Audit-Id: e866fbe5-3e90-4d50-a5d7-0fdbb4eeea16
	I0314 19:44:54.031306    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:54.031306    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:54.031306    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:54.031306    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:54.031306    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:54 GMT
	I0314 19:44:54.031459    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:54.526516    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:54.526516    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:54.526516    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:54.526516    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:54.530184    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:54.530508    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:54.530565    8428 round_trippers.go:580]     Audit-Id: 579b04a8-ce88-466a-ba03-da457c6e0b58
	I0314 19:44:54.530593    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:54.530593    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:54.530649    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:54.530699    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:54.530699    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:54 GMT
	I0314 19:44:54.530947    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:55.025559    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:55.025761    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.025837    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.025837    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.031629    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:55.031629    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.031629    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.031629    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.031629    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.031629    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.031629    8428 round_trippers.go:580]     Audit-Id: 1454818e-b0b2-400c-a348-7408cac08c5e
	I0314 19:44:55.031629    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.031970    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2110","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0314 19:44:55.032103    8428 node_ready.go:49] node "multinode-442000-m02" has status "Ready":"True"
	I0314 19:44:55.032103    8428 node_ready.go:38] duration metric: took 18.5130743s for node "multinode-442000-m02" to be "Ready" ...
	I0314 19:44:55.032103    8428 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:44:55.032103    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods
	I0314 19:44:55.032103    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.032103    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.032103    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.038123    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:44:55.038438    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.038438    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.038438    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.038438    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.038438    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.038438    8428 round_trippers.go:580]     Audit-Id: 7221dde7-b0c7-4093-9385-86fa4c1a9551
	I0314 19:44:55.038438    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.039626    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2112"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1908","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82557 chars]
	I0314 19:44:55.043548    8428 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.044116    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:44:55.044116    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.044116    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.044163    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.047137    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:44:55.047137    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.047137    8428 round_trippers.go:580]     Audit-Id: a29737c5-cbe5-41d5-b0d0-efa9c7fcb612
	I0314 19:44:55.047816    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.047816    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.047816    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.047816    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.047816    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.047917    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1908","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0314 19:44:55.048302    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:55.048302    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.048302    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.048302    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.050873    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:44:55.050873    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.050873    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.050873    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.050873    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.050873    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.050873    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.050873    8428 round_trippers.go:580]     Audit-Id: 7b83fe14-89fa-4208-a8a8-19d75c229969
	I0314 19:44:55.051823    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:44:55.052187    8428 pod_ready.go:92] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:55.052187    8428 pod_ready.go:81] duration metric: took 8.6381ms for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.052187    8428 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.052187    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-442000
	I0314 19:44:55.052187    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.052187    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.052187    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.055078    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:44:55.055611    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.055611    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.055611    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.055611    8428 round_trippers.go:580]     Audit-Id: ebb19d2a-d5be-4221-b97d-f21a89d54183
	I0314 19:44:55.055611    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.055611    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.055611    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.055761    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-442000","namespace":"kube-system","uid":"106cc31d-907f-4853-9e8d-f13c8ac4e398","resourceVersion":"1808","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.93.236:2379","kubernetes.io/config.hash":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.mirror":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.seen":"2024-03-14T19:41:00.367789550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0314 19:44:55.055960    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:55.055960    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.055960    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.055960    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.059249    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:55.059249    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.059249    8428 round_trippers.go:580]     Audit-Id: 26320e85-7756-45af-943d-a496f77b5177
	I0314 19:44:55.059249    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.059249    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.059249    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.059249    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.059249    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.059715    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:44:55.059715    8428 pod_ready.go:92] pod "etcd-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:55.059715    8428 pod_ready.go:81] duration metric: took 7.5276ms for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.059715    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.060245    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-442000
	I0314 19:44:55.060245    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.060245    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.060245    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.062428    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:44:55.062428    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.062428    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.062428    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.062428    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.062428    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.062428    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.062428    8428 round_trippers.go:580]     Audit-Id: d6747f61-2a1e-4a63-a093-49c0d2fa8c3c
	I0314 19:44:55.063384    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-442000","namespace":"kube-system","uid":"ebdd5ddf-2b02-4315-bc64-1b10c383d507","resourceVersion":"1817","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.93.236:8443","kubernetes.io/config.hash":"7754d2f32966faec8123dc3b8a2af767","kubernetes.io/config.mirror":"7754d2f32966faec8123dc3b8a2af767","kubernetes.io/config.seen":"2024-03-14T19:41:00.350706636Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0314 19:44:55.063384    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:55.063384    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.063384    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.063384    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.066036    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:44:55.066036    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.066036    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.066036    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.066036    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.066036    8428 round_trippers.go:580]     Audit-Id: c8159632-c2e1-48f3-aa39-7528ec8b1265
	I0314 19:44:55.066036    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.066036    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.067055    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:44:55.067704    8428 pod_ready.go:92] pod "kube-apiserver-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:55.067704    8428 pod_ready.go:81] duration metric: took 7.988ms for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.067748    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.067780    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-442000
	I0314 19:44:55.067780    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.067780    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.067780    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.070008    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:44:55.070008    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.070008    8428 round_trippers.go:580]     Audit-Id: a86a88b8-91fe-4d6e-99a3-b0c2533e83ad
	I0314 19:44:55.070008    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.070008    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.070008    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.070008    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.070008    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.071101    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-442000","namespace":"kube-system","uid":"b16fc874-ef74-44ca-a54f-bb678bf982df","resourceVersion":"1813","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.mirror":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.seen":"2024-03-14T19:18:55.420205308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0314 19:44:55.071720    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:55.071720    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.071720    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.071720    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.074984    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:55.074984    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.074984    8428 round_trippers.go:580]     Audit-Id: 69a22c54-086e-43c1-a97e-e8fc9348ef17
	I0314 19:44:55.074984    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.074984    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.074984    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.074984    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.074984    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.075688    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:44:55.076003    8428 pod_ready.go:92] pod "kube-controller-manager-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:55.076003    8428 pod_ready.go:81] duration metric: took 8.2541ms for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.076003    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.225643    8428 request.go:629] Waited for 149.49ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:44:55.225832    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:44:55.225832    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.225832    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.225832    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.229641    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:55.229641    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.229641    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.229641    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.229641    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.229641    8428 round_trippers.go:580]     Audit-Id: 18982d73-5ebe-47cf-a6b4-63e5a753ddaf
	I0314 19:44:55.229641    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.229641    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.230037    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-72dzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"80b840b0-3803-4102-a966-ea73aed74f49","resourceVersion":"2094","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5542 chars]
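	[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" messages around this point come from client-go's client-side token-bucket limiter (rest.Config's QPS/Burst), not from the API server. A minimal Go sketch of that mechanism, with illustrative QPS/Burst values rather than whatever minikube actually configures:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // client-go enforces rest.Config.QPS/Burst with a token bucket;
        // 5 qps / burst 10 here are illustrative values only.
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
        for i := 0; i < 12; i++ {
            start := time.Now()
            limiter.Accept() // blocks until a token is available
            if waited := time.Since(start); waited > time.Millisecond {
                // analogous to request.go's "Waited for ..." log line above
                fmt.Printf("request %d waited %v\n", i, waited)
            }
        }
    }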
	I0314 19:44:55.427584    8428 request.go:629] Waited for 196.9716ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:55.427927    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:55.427927    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.427927    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.427927    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.431778    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:55.431778    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.432330    8428 round_trippers.go:580]     Audit-Id: a462edc8-0c50-4db1-8881-d940ec00b59c
	I0314 19:44:55.432330    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.432330    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.432330    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.432330    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.432330    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.432458    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2110","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0314 19:44:55.432932    8428 pod_ready.go:92] pod "kube-proxy-72dzs" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:55.432932    8428 pod_ready.go:81] duration metric: took 356.903ms for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.432932    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.631631    8428 request.go:629] Waited for 198.4338ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:44:55.631803    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:44:55.631803    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.631803    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.631803    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.635498    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:55.636214    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.636214    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.636214    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.636214    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.636214    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.636214    8428 round_trippers.go:580]     Audit-Id: f51fe332-d756-4c07-8dd7-5b2d7e182b6d
	I0314 19:44:55.636214    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.636441    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cg28g","generateName":"kube-proxy-","namespace":"kube-system","uid":"c7f798bf-6722-4731-af8d-ccd5703d116e","resourceVersion":"1728","creationTimestamp":"2024-03-14T19:19:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0314 19:44:55.833688    8428 request.go:629] Waited for 196.5699ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:55.833926    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:55.834130    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.834130    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.834130    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.838166    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:55.838166    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.838917    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.838917    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.838917    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:56 GMT
	I0314 19:44:55.838917    8428 round_trippers.go:580]     Audit-Id: 8fb43a63-d3d5-4e28-ba6d-f92a65d17b86
	I0314 19:44:55.838917    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.838917    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.839435    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:44:55.839974    8428 pod_ready.go:92] pod "kube-proxy-cg28g" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:55.840012    8428 pod_ready.go:81] duration metric: took 407.0501ms for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.840048    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2qls" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:56.037784    8428 request.go:629] Waited for 197.6423ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2qls
	I0314 19:44:56.038143    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2qls
	I0314 19:44:56.038143    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:56.038223    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:56.038270    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:56.042145    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:56.042145    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:56.042717    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:56.042717    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:56.042717    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:56.042717    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:56 GMT
	I0314 19:44:56.042717    8428 round_trippers.go:580]     Audit-Id: 940a2641-4309-4676-98c0-2d3de3e95f4a
	I0314 19:44:56.042776    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:56.042911    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w2qls","generateName":"kube-proxy-","namespace":"kube-system","uid":"7a53e602-282e-4b63-a993-a5d23d3c615f","resourceVersion":"1678","creationTimestamp":"2024-03-14T19:26:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:26:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0314 19:44:56.240539    8428 request.go:629] Waited for 196.7889ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m03
	I0314 19:44:56.240539    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m03
	I0314 19:44:56.240539    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:56.240539    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:56.240539    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:56.244496    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:56.244496    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:56.244496    8428 round_trippers.go:580]     Audit-Id: 3c75c8cd-e522-4f5a-ae2a-d1d4550ee94d
	I0314 19:44:56.244496    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:56.244496    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:56.244496    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:56.244496    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:56.244496    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:56 GMT
	I0314 19:44:56.245210    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m03","uid":"1b8e342b-6e96-49e8-a22c-874445d29fe3","resourceVersion":"1846","creationTimestamp":"2024-03-14T19:36:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_36_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:36:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0314 19:44:56.245592    8428 pod_ready.go:97] node "multinode-442000-m03" hosting pod "kube-proxy-w2qls" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m03" has status "Ready":"Unknown"
	I0314 19:44:56.245592    8428 pod_ready.go:81] duration metric: took 405.5144ms for pod "kube-proxy-w2qls" in "kube-system" namespace to be "Ready" ...
	E0314 19:44:56.245592    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000-m03" hosting pod "kube-proxy-w2qls" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m03" has status "Ready":"Unknown"
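	[editor's note] The skip above hinges on the hosting node's Ready condition being "Unknown" rather than "True". A small helper in the style of that check, using client-go's core/v1 types (the function name is hypothetical):

    import corev1 "k8s.io/api/core/v1"

    // nodeIsReady reports whether the NodeReady condition is "True".
    // "Unknown" (kubelet no longer posting status, as for m03 here) fails
    // the check exactly like "False", so the pod wait is skipped.
    func nodeIsReady(node *corev1.Node) bool {
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }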
	I0314 19:44:56.245592    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:56.426970    8428 request.go:629] Waited for 180.8045ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:44:56.427304    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:44:56.427304    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:56.427304    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:56.427304    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:56.431085    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:56.431085    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:56.431085    8428 round_trippers.go:580]     Audit-Id: 2f795128-3064-4963-ab46-652c719623a5
	I0314 19:44:56.431085    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:56.431085    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:56.431085    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:56.431085    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:56.431085    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:56 GMT
	I0314 19:44:56.432232    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-442000","namespace":"kube-system","uid":"76b10598-fe0d-4a14-a8e4-a32221fbb68f","resourceVersion":"1803","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.mirror":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.seen":"2024-03-14T19:18:55.420206709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0314 19:44:56.632348    8428 request.go:629] Waited for 199.4266ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:56.632439    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:56.632534    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:56.632534    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:56.632597    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:56.636810    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:56.636810    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:56.636810    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:56.636810    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:56.636810    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:56.636810    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:56.636810    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:56 GMT
	I0314 19:44:56.636921    8428 round_trippers.go:580]     Audit-Id: b75dbdaf-e647-4f65-be35-b54e988a9d92
	I0314 19:44:56.637315    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:44:56.637315    8428 pod_ready.go:92] pod "kube-scheduler-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:56.637315    8428 pod_ready.go:81] duration metric: took 391.6935ms for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:56.637315    8428 pod_ready.go:38] duration metric: took 1.6050924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
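	[editor's note] Each block above is one readiness pass: GET the pod, then GET its hosting node, repeated on a roughly 500ms cadence until everything reports Ready. A condensed Go sketch of such a poll, assuming client-go and an illustrative selector/interval/timeout (this is not minikube's exact implementation):

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitSystemPods polls until every matching kube-system pod has the
    // PodReady condition set to "True".
    func waitSystemPods(ctx context.Context, cs kubernetes.Interface, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                for i := range pods.Items {
                    if !podReady(&pods.Items[i]) {
                        return false, nil
                    }
                }
                return true, nil
            })
    }

    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }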
	I0314 19:44:56.637858    8428 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:44:56.647302    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:44:56.670358    8428 system_svc.go:56] duration metric: took 32.4974ms WaitForService to wait for kubelet
	I0314 19:44:56.670421    8428 kubeadm.go:576] duration metric: took 20.4376182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:44:56.670421    8428 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:44:56.833907    8428 request.go:629] Waited for 163.0453ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes
	I0314 19:44:56.833907    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes
	I0314 19:44:56.833907    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:56.833907    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:56.833907    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:56.837622    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:56.837622    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:56.837622    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:56.837622    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:57 GMT
	I0314 19:44:56.837622    8428 round_trippers.go:580]     Audit-Id: 21e3f0e9-8317-4b72-b74e-ec4cc60bd3b2
	I0314 19:44:56.837622    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:56.837622    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:56.837622    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:56.838815    8428 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2114"},"items":[{"metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15606 chars]
	I0314 19:44:56.839497    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:44:56.839497    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:44:56.839497    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:44:56.839497    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:44:56.839497    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:44:56.839497    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:44:56.839497    8428 node_conditions.go:105] duration metric: took 169.0636ms to run NodePressure ...
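	[editor's note] The three capacity pairs above correspond to the three nodes in the NodeList just fetched (2 CPUs and 17734596Ki ephemeral storage each). Reading them off the API objects is direct; a sketch over such a list (function name hypothetical):

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func printNodeCapacity(nodes *corev1.NodeList) {
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            // prints in the same units the log shows,
            // e.g. cpu=2 ephemeral-storage=17734596Ki
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }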
	I0314 19:44:56.839497    8428 start.go:240] waiting for startup goroutines ...
	I0314 19:44:56.839497    8428 start.go:254] writing updated cluster config ...
	I0314 19:44:56.843730    8428 out.go:177] 
	I0314 19:44:56.846646    8428 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:44:56.857353    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:44:56.857353    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:44:56.863225    8428 out.go:177] * Starting "multinode-442000-m03" worker node in "multinode-442000" cluster
	I0314 19:44:56.865317    8428 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:44:56.865317    8428 cache.go:56] Caching tarball of preloaded images
	I0314 19:44:56.865713    8428 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 19:44:56.865713    8428 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 19:44:56.865713    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:44:56.873278    8428 start.go:360] acquireMachinesLock for multinode-442000-m03: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:44:56.873278    8428 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-442000-m03"
	I0314 19:44:56.873278    8428 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:44:56.873278    8428 fix.go:54] fixHost starting: m03
	I0314 19:44:56.874011    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:44:58.832982    8428 main.go:141] libmachine: [stdout =====>] : Off
	
	I0314 19:44:58.832982    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:58.832982    8428 fix.go:112] recreateIfNeeded on multinode-442000-m03: state=Stopped err=<nil>
	W0314 19:44:58.833998    8428 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:44:58.848474    8428 out.go:177] * Restarting existing hyperv VM for "multinode-442000-m03" ...
	I0314 19:44:58.853649    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-442000-m03
	I0314 19:45:01.109749    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:45:01.109749    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:01.109749    8428 main.go:141] libmachine: Waiting for host to start...
	I0314 19:45:01.109749    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:03.199728    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:03.199728    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:03.199728    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:05.518024    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:45:05.518024    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:06.531932    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:08.542630    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:08.542630    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:08.542720    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:10.857335    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:45:10.857335    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:11.868282    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:13.861311    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:13.861828    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:13.861966    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:16.155080    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:45:16.155080    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:17.165288    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:19.169146    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:19.169146    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:19.169461    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:21.484034    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:45:21.484866    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:22.491632    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:24.563128    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:24.563128    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:24.563128    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:26.923984    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:26.923984    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:26.926928    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:28.891146    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:28.891828    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:28.891828    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:31.267766    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:31.268252    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:31.268252    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:45:31.270525    8428 machine.go:94] provisionDockerMachine start ...
	I0314 19:45:31.270681    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:33.252026    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:33.252026    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:33.252275    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:35.594673    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:35.594673    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:35.598570    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:45:35.598656    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.252 22 <nil> <nil>}
	I0314 19:45:35.598656    8428 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:45:35.729082    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:45:35.729082    8428 buildroot.go:166] provisioning hostname "multinode-442000-m03"
	I0314 19:45:35.729185    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:37.700219    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:37.700219    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:37.700775    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:40.070258    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:40.070780    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:40.074459    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:45:40.074982    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.252 22 <nil> <nil>}
	I0314 19:45:40.074982    8428 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-442000-m03 && echo "multinode-442000-m03" | sudo tee /etc/hostname
	I0314 19:45:40.234976    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-442000-m03
	
	I0314 19:45:40.234976    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:42.242937    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:42.243078    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:42.243078    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:44.611732    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:44.611732    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:44.615670    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:45:44.616085    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.252 22 <nil> <nil>}
	I0314 19:45:44.616085    8428 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-442000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-442000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-442000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:45:44.769043    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:45:44.769043    8428 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 19:45:44.769043    8428 buildroot.go:174] setting up certificates
	I0314 19:45:44.769043    8428 provision.go:84] configureAuth start
	I0314 19:45:44.769043    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:46.726827    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:46.726827    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:46.726827    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:49.050646    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:49.050646    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:49.051003    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:51.016284    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:51.016284    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:51.016369    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:53.398590    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:53.398590    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:53.398650    8428 provision.go:143] copyHostCerts
	I0314 19:45:53.398766    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 19:45:53.398994    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 19:45:53.398994    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 19:45:53.399553    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 19:45:53.400200    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 19:45:53.400200    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 19:45:53.400200    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 19:45:53.400746    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 19:45:53.401030    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 19:45:53.401728    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 19:45:53.401728    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 19:45:53.401989    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 19:45:53.402692    8428 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-442000-m03 san=[127.0.0.1 172.17.91.252 localhost minikube multinode-442000-m03]
	I0314 19:45:53.975510    8428 provision.go:177] copyRemoteCerts
	I0314 19:45:53.985591    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:45:53.985662    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:55.944611    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:55.945412    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:55.945531    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:58.312194    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:58.312890    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:58.313257    8428 sshutil.go:53] new ssh client: &{IP:172.17.91.252 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m03\id_rsa Username:docker}
	I0314 19:45:58.422676    8428 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4366857s)
	I0314 19:45:58.422676    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 19:45:58.422676    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:45:58.464670    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 19:45:58.464670    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0314 19:45:58.514330    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 19:45:58.514587    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:45:58.556708    8428 provision.go:87] duration metric: took 13.7866442s to configureAuth
	I0314 19:45:58.556708    8428 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:45:58.557328    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:45:58.557328    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:46:00.542326    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:46:00.542326    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:46:00.543122    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:46:02.887487    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:46:02.887808    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:46:02.891673    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:46:02.891945    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.252 22 <nil> <nil>}
	I0314 19:46:02.891945    8428 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 19:46:03.022681    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 19:46:03.022750    8428 buildroot.go:70] root file system type: tmpfs
	I0314 19:46:03.022954    8428 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 19:46:03.023049    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:46:04.981895    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:46:04.981895    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:46:04.981895    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-442000" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-442000
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-442000: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-442000" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-442000	172.17.86.124
multinode-442000-m02	172.17.80.135
multinode-442000-m03	172.17.84.215

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-442000 -n multinode-442000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-442000 -n multinode-442000: (11.0250374s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 logs -n 25: (12.1454186s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-442000 cp testdata\cp-test.txt                                                                                 | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:29 UTC | 14 Mar 24 19:29 UTC |
	|         | multinode-442000-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:29 UTC | 14 Mar 24 19:29 UTC |
	|         | multinode-442000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000-m02:/home/docker/cp-test.txt                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:29 UTC | 14 Mar 24 19:30 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1678027892\001\cp-test_multinode-442000-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:30 UTC | 14 Mar 24 19:30 UTC |
	|         | multinode-442000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000-m02:/home/docker/cp-test.txt                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:30 UTC | 14 Mar 24 19:30 UTC |
	|         | multinode-442000:/home/docker/cp-test_multinode-442000-m02_multinode-442000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:30 UTC | 14 Mar 24 19:30 UTC |
	|         | multinode-442000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n multinode-442000 sudo cat                                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:30 UTC | 14 Mar 24 19:30 UTC |
	|         | /home/docker/cp-test_multinode-442000-m02_multinode-442000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000-m02:/home/docker/cp-test.txt                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:30 UTC | 14 Mar 24 19:31 UTC |
	|         | multinode-442000-m03:/home/docker/cp-test_multinode-442000-m02_multinode-442000-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:31 UTC |
	|         | multinode-442000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n multinode-442000-m03 sudo cat                                                                    | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:31 UTC |
	|         | /home/docker/cp-test_multinode-442000-m02_multinode-442000-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp testdata\cp-test.txt                                                                                 | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:31 UTC |
	|         | multinode-442000-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:31 UTC |
	|         | multinode-442000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000-m03:/home/docker/cp-test.txt                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:31 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1678027892\001\cp-test_multinode-442000-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:31 UTC |
	|         | multinode-442000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000-m03:/home/docker/cp-test.txt                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:31 UTC | 14 Mar 24 19:32 UTC |
	|         | multinode-442000:/home/docker/cp-test_multinode-442000-m03_multinode-442000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:32 UTC | 14 Mar 24 19:32 UTC |
	|         | multinode-442000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n multinode-442000 sudo cat                                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:32 UTC | 14 Mar 24 19:32 UTC |
	|         | /home/docker/cp-test_multinode-442000-m03_multinode-442000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-442000 cp multinode-442000-m03:/home/docker/cp-test.txt                                                        | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:32 UTC | 14 Mar 24 19:32 UTC |
	|         | multinode-442000-m02:/home/docker/cp-test_multinode-442000-m03_multinode-442000-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n                                                                                                  | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:32 UTC | 14 Mar 24 19:32 UTC |
	|         | multinode-442000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-442000 ssh -n multinode-442000-m02 sudo cat                                                                    | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:32 UTC | 14 Mar 24 19:33 UTC |
	|         | /home/docker/cp-test_multinode-442000-m03_multinode-442000-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-442000 node stop m03                                                                                           | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:33 UTC |                     |
	| node    | multinode-442000 node start                                                                                              | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:34 UTC | 14 Mar 24 19:36 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-442000                                                                                                 | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:37 UTC |                     |
	| stop    | -p multinode-442000                                                                                                      | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:37 UTC | 14 Mar 24 19:39 UTC |
	| start   | -p multinode-442000                                                                                                      | multinode-442000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 19:39 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:39:02
	Running on machine: minikube7
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:39:02.625615    8428 out.go:291] Setting OutFile to fd 1780 ...
	I0314 19:39:02.626675    8428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:39:02.626675    8428 out.go:304] Setting ErrFile to fd 1656...
	I0314 19:39:02.626675    8428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:39:02.646420    8428 out.go:298] Setting JSON to false
	I0314 19:39:02.649032    8428 start.go:129] hostinfo: {"hostname":"minikube7","uptime":66947,"bootTime":1710378195,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 19:39:02.649032    8428 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 19:39:02.676633    8428 out.go:177] * [multinode-442000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 19:39:02.876298    8428 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:39:02.719328    8428 notify.go:220] Checking for updates...
	I0314 19:39:03.065147    8428 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:39:03.115186    8428 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 19:39:03.254105    8428 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:39:03.420663    8428 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:39:03.429141    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:39:03.429417    8428 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:39:08.617424    8428 out.go:177] * Using the hyperv driver based on existing profile
	I0314 19:39:08.622317    8428 start.go:297] selected driver: hyperv
	I0314 19:39:08.622317    8428 start.go:901] validating driver "hyperv" against &{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.84.215 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fa
lse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:39:08.622487    8428 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:39:08.669081    8428 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:39:08.669151    8428 cni.go:84] Creating CNI manager for ""
	I0314 19:39:08.669151    8428 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 19:39:08.669295    8428 start.go:340] cluster config:
	{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.86.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.84.215 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner
:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:39:08.669603    8428 iso.go:125] acquiring lock: {Name:mk1b3e73402180391a20a865a9454da445c269fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:39:08.823039    8428 out.go:177] * Starting "multinode-442000" primary control-plane node in "multinode-442000" cluster
	I0314 19:39:08.872180    8428 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:39:08.872280    8428 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0314 19:39:08.872280    8428 cache.go:56] Caching tarball of preloaded images
	I0314 19:39:08.872812    8428 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 19:39:08.873066    8428 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 19:39:08.873445    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:39:08.877074    8428 start.go:360] acquireMachinesLock for multinode-442000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:39:08.877162    8428 start.go:364] duration metric: took 88.5µs to acquireMachinesLock for "multinode-442000"
	I0314 19:39:08.877162    8428 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:39:08.877162    8428 fix.go:54] fixHost starting: 
	I0314 19:39:08.877808    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:11.462259    8428 main.go:141] libmachine: [stdout =====>] : Off
	
	I0314 19:39:11.462259    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:11.462259    8428 fix.go:112] recreateIfNeeded on multinode-442000: state=Stopped err=<nil>
	W0314 19:39:11.462259    8428 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:39:11.527884    8428 out.go:177] * Restarting existing hyperv VM for "multinode-442000" ...
	I0314 19:39:11.531003    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-442000
	I0314 19:39:15.520294    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:39:15.520294    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:15.520294    8428 main.go:141] libmachine: Waiting for host to start...
	I0314 19:39:15.520294    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:17.578362    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:17.578865    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:17.578865    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:19.898828    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:39:19.898828    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:20.908383    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:22.933851    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:22.933851    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:22.934499    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:25.225186    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:39:25.225186    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:26.227725    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:28.251206    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:28.251388    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:28.251486    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:30.558089    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:39:30.558089    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:31.566622    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:33.559717    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:33.559781    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:33.559781    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:35.875289    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:39:35.875289    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:36.886006    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:38.917520    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:38.917939    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:38.917939    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:41.267585    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:39:41.267585    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:41.270463    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:43.251733    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:43.251880    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:43.251957    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:45.644162    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:39:45.644162    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:45.644792    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:39:45.646761    8428 machine.go:94] provisionDockerMachine start ...
	I0314 19:39:45.646870    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:47.623471    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:47.623557    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:47.623557    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:49.994101    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:39:49.994101    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:49.998736    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:39:49.998736    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:39:49.998736    8428 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:39:50.139786    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:39:50.139884    8428 buildroot.go:166] provisioning hostname "multinode-442000"
	I0314 19:39:50.140008    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:52.110791    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:52.110791    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:52.110791    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:54.474094    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:39:54.474094    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:54.478157    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:39:54.478566    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:39:54.478647    8428 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-442000 && echo "multinode-442000" | sudo tee /etc/hostname
	I0314 19:39:54.645826    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-442000
	
	I0314 19:39:54.645915    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:39:56.597485    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:39:56.597485    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:56.597797    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:39:58.974093    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:39:58.974093    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:39:58.981067    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:39:58.981067    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:39:58.981067    8428 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-442000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-442000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-442000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:39:59.130757    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:39:59.130757    8428 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 19:39:59.130757    8428 buildroot.go:174] setting up certificates
	I0314 19:39:59.130757    8428 provision.go:84] configureAuth start
	I0314 19:39:59.131540    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:01.112146    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:01.112146    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:01.112204    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:03.486170    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:03.486170    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:03.486170    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:05.459428    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:05.459428    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:05.459428    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:07.792496    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:07.792496    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:07.792496    8428 provision.go:143] copyHostCerts
	I0314 19:40:07.793369    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 19:40:07.793369    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 19:40:07.793369    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 19:40:07.794065    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 19:40:07.795007    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 19:40:07.795797    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 19:40:07.795961    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 19:40:07.795961    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 19:40:07.796719    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 19:40:07.797326    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 19:40:07.797326    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 19:40:07.797326    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 19:40:07.797996    8428 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-442000 san=[127.0.0.1 172.17.93.236 localhost minikube multinode-442000]
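
	provision.go:117 generates the VM's Docker server certificate, signed by the local CA, with the bracketed SAN set above (loopback, the VM IP, and the usual minikube hostnames). A sketch of that step with the standard crypto/x509 package; the key size, serial number, and validity window are illustrative choices, not minikube's exact values:

	    package certs

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"math/big"
	    	"net"
	    	"time"
	    )

	    // newServerCert signs a server certificate whose SAN list is split
	    // into IP and DNS entries, the way the san=[...] list above is used.
	    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
	    	key, err := rsa.GenerateKey(rand.Reader, 2048)
	    	if err != nil {
	    		return nil, nil, err
	    	}
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{org}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().AddDate(3, 0, 0),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    	}
	    	for _, s := range sans {
	    		if ip := net.ParseIP(s); ip != nil {
	    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
	    		} else {
	    			tmpl.DNSNames = append(tmpl.DNSNames, s)
	    		}
	    	}
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	    	return der, key, err
	    }
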
	I0314 19:40:08.179126    8428 provision.go:177] copyRemoteCerts
	I0314 19:40:08.191121    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:40:08.191121    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:10.185425    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:10.185425    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:10.186036    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:12.534992    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:12.535721    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:12.535721    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
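
	sshutil.go:53 opens a key-authenticated SSH connection as the docker user using the per-machine id_rsa shown in the struct above. Roughly equivalent, with golang.org/x/crypto/ssh (host-key verification is skipped here only to keep the sketch short; a real client should pin the host key):

	    package sshdial

	    import (
	    	"os"

	    	"golang.org/x/crypto/ssh"
	    )

	    // dial connects to ip:22 with public-key auth, mirroring the
	    // {IP Port SSHKeyPath Username} fields logged above.
	    func dial(ip, keyPath string) (*ssh.Client, error) {
	    	pem, err := os.ReadFile(keyPath)
	    	if err != nil {
	    		return nil, err
	    	}
	    	signer, err := ssh.ParsePrivateKey(pem)
	    	if err != nil {
	    		return nil, err
	    	}
	    	cfg := &ssh.ClientConfig{
	    		User:            "docker",
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	    	}
	    	return ssh.Dial("tcp", ip+":22", cfg)
	    }
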
	I0314 19:40:12.643746    8428 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4522878s)
	I0314 19:40:12.645778    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 19:40:12.646410    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:40:12.690092    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 19:40:12.690092    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0314 19:40:12.736222    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 19:40:12.736595    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:40:12.783798    8428 provision.go:87] duration metric: took 13.6520056s to configureAuth
	I0314 19:40:12.783938    8428 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:40:12.784532    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:40:12.784623    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:14.772316    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:14.772571    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:14.772571    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:17.126045    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:17.126045    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:17.130726    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:40:17.131251    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:40:17.131364    8428 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 19:40:17.274520    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 19:40:17.274520    8428 buildroot.go:70] root file system type: tmpfs
	I0314 19:40:17.274520    8428 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 19:40:17.274520    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:19.239278    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:19.239278    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:19.240298    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:21.613985    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:21.613985    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:21.618312    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:40:21.618465    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:40:21.618465    8428 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 19:40:21.786728    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 19:40:21.786728    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:23.741801    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:23.741801    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:23.742000    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:26.151856    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:26.151856    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:26.156707    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:40:26.156707    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:40:26.156707    8428 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 19:40:28.541060    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 19:40:28.541142    8428 machine.go:97] duration metric: took 42.8911279s to provisionDockerMachine
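
	Two notes on the unit install that just completed. First, the %!s(MISSING) token in the echoed command is Go's fmt placeholder for a missing format argument, an artifact of replaying the command through the logger, not part of what actually ran. Second, the install is deliberately idempotent: the rendered unit is written to docker.service.new, diffed against the live file, and only when they differ is it moved into place and docker reloaded, enabled, and restarted. Here diff fails because no docker.service existed yet, so the fallback branch installs the unit and systemctl enable creates the symlink. The gating pattern, sketched locally in Go:

	    package unit

	    import (
	    	"bytes"
	    	"os"
	    )

	    // installIfChanged swaps in rendered only when it differs from what
	    // is on disk; the caller restarts the daemon only when changed is true.
	    func installIfChanged(path string, rendered []byte) (changed bool, err error) {
	    	cur, err := os.ReadFile(path)
	    	if err == nil && bytes.Equal(cur, rendered) {
	    		return false, nil // identical: skip the restart entirely
	    	}
	    	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
	    		return false, err
	    	}
	    	return true, os.Rename(path+".new", path)
	    }
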
	I0314 19:40:28.541142    8428 start.go:293] postStartSetup for "multinode-442000" (driver="hyperv")
	I0314 19:40:28.541142    8428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:40:28.552934    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:40:28.552934    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:30.512463    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:30.512463    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:30.512463    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:32.860394    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:32.860394    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:32.861252    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:40:32.968061    8428 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4147186s)
	I0314 19:40:32.976856    8428 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:40:32.983165    8428 command_runner.go:130] > NAME=Buildroot
	I0314 19:40:32.983165    8428 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 19:40:32.983285    8428 command_runner.go:130] > ID=buildroot
	I0314 19:40:32.983285    8428 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 19:40:32.983285    8428 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 19:40:32.983350    8428 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:40:32.983350    8428 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 19:40:32.983350    8428 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 19:40:32.984582    8428 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 19:40:32.984582    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 19:40:32.994800    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:40:33.010951    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 19:40:33.054192    8428 start.go:296] duration metric: took 4.5127083s for postStartSetup
	I0314 19:40:33.054192    8428 fix.go:56] duration metric: took 1m24.1706439s for fixHost
	I0314 19:40:33.054192    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:35.037620    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:35.037620    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:35.037620    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:37.375754    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:37.375754    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:37.379584    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:40:37.380125    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:40:37.380125    8428 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:40:37.519664    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710445237.779688673
	
	I0314 19:40:37.519664    8428 fix.go:216] guest clock: 1710445237.779688673
	I0314 19:40:37.519664    8428 fix.go:229] Guest: 2024-03-14 19:40:37.779688673 +0000 UTC Remote: 2024-03-14 19:40:33.0541927 +0000 UTC m=+90.580944101 (delta=4.725495973s)
	I0314 19:40:37.519734    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:39.497293    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:39.498250    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:39.498372    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:41.891520    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:41.891520    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:41.895458    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:40:41.896077    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.236 22 <nil> <nil>}
	I0314 19:40:41.896077    8428 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710445237
	I0314 19:40:42.049221    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 19:40:37 UTC 2024
	
	I0314 19:40:42.049221    8428 fix.go:236] clock set: Thu Mar 14 19:40:37 UTC 2024
	 (err=<nil>)
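
	fix.go reads the guest clock over SSH, logs the skew against the host (4.7s here), and resets the guest with sudo date -s @<seconds>. The date +%!s(MISSING).%!N(MISSING) command is again the logger's missing-argument artifact; given the seconds.nanoseconds output, the underlying format is presumably date +%s.%N. Parsing that stamp and computing the skew in Go might look like:

	    package main

	    import (
	    	"fmt"
	    	"strconv"
	    	"strings"
	    	"time"
	    )

	    // guestClock parses a "seconds.nanoseconds" stamp as printed by date.
	    func guestClock(stamp string) (time.Time, error) {
	    	parts := strings.SplitN(strings.TrimSpace(stamp), ".", 2)
	    	sec, err := strconv.ParseInt(parts[0], 10, 64)
	    	if err != nil {
	    		return time.Time{}, err
	    	}
	    	var nsec int64
	    	if len(parts) == 2 {
	    		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	    	}
	    	return time.Unix(sec, nsec), nil
	    }

	    func main() {
	    	guest, _ := guestClock("1710445237.779688673") // stamp from the log
	    	fmt.Println("guest/host skew:", time.Until(guest))
	    }
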
	I0314 19:40:42.049386    8428 start.go:83] releasing machines lock for "multinode-442000", held for 1m33.1651553s
	I0314 19:40:42.049461    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:44.010021    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:44.010705    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:44.010782    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:46.365248    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:46.365577    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:46.368874    8428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:40:46.368953    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:46.376353    8428 ssh_runner.go:195] Run: cat /version.json
	I0314 19:40:46.376353    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:40:48.346155    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:48.346155    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:48.346155    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:48.348108    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:40:48.348108    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:48.348108    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:40:50.725561    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:50.725561    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:50.726613    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:40:50.769534    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:40:50.769761    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:40:50.769761    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:40:50.956491    8428 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 19:40:50.956491    8428 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5872692s)
	I0314 19:40:50.956693    8428 command_runner.go:130] > {"iso_version": "v1.32.1-1710348681-18375", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "fd5757a6603390a2c0efe3b1e5cdd797538203fd"}
	I0314 19:40:50.956780    8428 ssh_runner.go:235] Completed: cat /version.json: (4.5799927s)
	I0314 19:40:50.966080    8428 ssh_runner.go:195] Run: systemctl --version
	I0314 19:40:50.974657    8428 command_runner.go:130] > systemd 252 (252)
	I0314 19:40:50.974657    8428 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0314 19:40:50.984378    8428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 19:40:50.991360    8428 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0314 19:40:50.992238    8428 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:40:51.000634    8428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:40:51.026317    8428 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0314 19:40:51.026451    8428 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
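
	cni.go:209/262 above first confirms no loopback config exists, then renames every bridge or podman config under /etc/cni/net.d to *.mk_disabled so the container runtime ignores it (kindnet is used instead on this multinode cluster). The find/mv pipeline, restated as an illustrative Go helper:

	    package cni

	    import (
	    	"os"
	    	"path/filepath"
	    	"strings"
	    )

	    // disableBridgeConfigs renames bridge/podman CNI configs so they are
	    // no longer picked up, mirroring the find -exec mv pipeline above.
	    func disableBridgeConfigs(dir string) ([]string, error) {
	    	entries, err := os.ReadDir(dir)
	    	if err != nil {
	    		return nil, err
	    	}
	    	var moved []string
	    	for _, e := range entries {
	    		name := e.Name()
	    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
	    			continue
	    		}
	    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
	    			src := filepath.Join(dir, name)
	    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
	    				return moved, err
	    			}
	    			moved = append(moved, src)
	    		}
	    	}
	    	return moved, nil
	    }
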
	I0314 19:40:51.026451    8428 start.go:494] detecting cgroup driver to use...
	I0314 19:40:51.026451    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:40:51.061844    8428 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0314 19:40:51.073589    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 19:40:51.101324    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 19:40:51.119293    8428 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 19:40:51.127857    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 19:40:51.154447    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:40:51.182910    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 19:40:51.211448    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:40:51.237874    8428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:40:51.266309    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 19:40:51.294353    8428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:40:51.310243    8428 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 19:40:51.320623    8428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:40:51.349378    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:40:51.545869    8428 ssh_runner.go:195] Run: sudo systemctl restart containerd
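
	The sed runs between 19:40:51.07 and 19:40:51.29 rewrite /etc/containerd/config.toml in place: pin the sandbox image to pause:3.9, disable restrict_oom_score_adj, force SystemdCgroup = false (containerd.go:146 chooses the cgroupfs driver), migrate runtime v1 names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, before the daemon-reload and restart above. The SystemdCgroup edit, as a single Go regexp replacement over the file contents (an illustrative stand-in for the sed invocation):

	    package containerdcfg

	    import "regexp"

	    // Matches any indented "SystemdCgroup = ..." line, like the sed above.
	    var systemdCgroup = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

	    // setCgroupfs forces SystemdCgroup = false so containerd and kubelet
	    // agree on the cgroupfs driver.
	    func setCgroupfs(configTOML string) string {
	    	return systemdCgroup.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
	    }
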
	I0314 19:40:51.574590    8428 start.go:494] detecting cgroup driver to use...
	I0314 19:40:51.586105    8428 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 19:40:51.607564    8428 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0314 19:40:51.607564    8428 command_runner.go:130] > [Unit]
	I0314 19:40:51.607564    8428 command_runner.go:130] > Description=Docker Application Container Engine
	I0314 19:40:51.607564    8428 command_runner.go:130] > Documentation=https://docs.docker.com
	I0314 19:40:51.607564    8428 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0314 19:40:51.607564    8428 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0314 19:40:51.607564    8428 command_runner.go:130] > StartLimitBurst=3
	I0314 19:40:51.607564    8428 command_runner.go:130] > StartLimitIntervalSec=60
	I0314 19:40:51.607564    8428 command_runner.go:130] > [Service]
	I0314 19:40:51.607564    8428 command_runner.go:130] > Type=notify
	I0314 19:40:51.607564    8428 command_runner.go:130] > Restart=on-failure
	I0314 19:40:51.607564    8428 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0314 19:40:51.607564    8428 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0314 19:40:51.607564    8428 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0314 19:40:51.607564    8428 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0314 19:40:51.607564    8428 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0314 19:40:51.607564    8428 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0314 19:40:51.607564    8428 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0314 19:40:51.607564    8428 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0314 19:40:51.607564    8428 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0314 19:40:51.607564    8428 command_runner.go:130] > ExecStart=
	I0314 19:40:51.607564    8428 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0314 19:40:51.608058    8428 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0314 19:40:51.608058    8428 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0314 19:40:51.608058    8428 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0314 19:40:51.608058    8428 command_runner.go:130] > LimitNOFILE=infinity
	I0314 19:40:51.608058    8428 command_runner.go:130] > LimitNPROC=infinity
	I0314 19:40:51.608058    8428 command_runner.go:130] > LimitCORE=infinity
	I0314 19:40:51.608058    8428 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0314 19:40:51.608058    8428 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0314 19:40:51.608058    8428 command_runner.go:130] > TasksMax=infinity
	I0314 19:40:51.608058    8428 command_runner.go:130] > TimeoutStartSec=0
	I0314 19:40:51.608058    8428 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0314 19:40:51.608058    8428 command_runner.go:130] > Delegate=yes
	I0314 19:40:51.608058    8428 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0314 19:40:51.608058    8428 command_runner.go:130] > KillMode=process
	I0314 19:40:51.608058    8428 command_runner.go:130] > [Install]
	I0314 19:40:51.608058    8428 command_runner.go:130] > WantedBy=multi-user.target
	I0314 19:40:51.618678    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:40:51.651292    8428 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:40:51.683681    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:40:51.714551    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:40:51.745489    8428 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 19:40:51.805850    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:40:51.828345    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:40:51.861970    8428 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0314 19:40:51.874456    8428 ssh_runner.go:195] Run: which cri-dockerd
	I0314 19:40:51.880911    8428 command_runner.go:130] > /usr/bin/cri-dockerd
	I0314 19:40:51.891375    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 19:40:51.907991    8428 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 19:40:51.945642    8428 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 19:40:52.127221    8428 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 19:40:52.308629    8428 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 19:40:52.308852    8428 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
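
	The 130-byte daemon.json pushed here is not echoed in the log. Given that docker.go:574 selects the cgroupfs driver, a plausible shape is the exec-opts stanza below; treat it as an assumption about the contents, not a copy of them:

	    package dockercfg

	    // daemonJSON is a guess at the pushed config, based only on the
	    // "cgroupfs" choice logged above; the real 130-byte payload may differ.
	    const daemonJSON = `{
	    "exec-opts": ["native.cgroupdriver=cgroupfs"],
	    "log-driver": "json-file",
	    "log-opts": {"max-size": "100m"},
	    "storage-driver": "overlay2"
	    }`
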
	I0314 19:40:52.347014    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:40:52.537598    8428 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 19:40:55.155720    8428 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.617924s)
	I0314 19:40:55.167960    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 19:40:55.201822    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:40:55.232206    8428 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 19:40:55.423642    8428 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 19:40:55.609931    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:40:55.797295    8428 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 19:40:55.835509    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:40:55.866682    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:40:56.052216    8428 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 19:40:56.149554    8428 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 19:40:56.158895    8428 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 19:40:56.168281    8428 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0314 19:40:56.168281    8428 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 19:40:56.168281    8428 command_runner.go:130] > Device: 0,22	Inode: 856         Links: 1
	I0314 19:40:56.168281    8428 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0314 19:40:56.168281    8428 command_runner.go:130] > Access: 2024-03-14 19:40:56.338615339 +0000
	I0314 19:40:56.168739    8428 command_runner.go:130] > Modify: 2024-03-14 19:40:56.338615339 +0000
	I0314 19:40:56.168739    8428 command_runner.go:130] > Change: 2024-03-14 19:40:56.341615570 +0000
	I0314 19:40:56.168739    8428 command_runner.go:130] >  Birth: -
	I0314 19:40:56.168797    8428 start.go:562] Will wait 60s for crictl version
	I0314 19:40:56.178007    8428 ssh_runner.go:195] Run: which crictl
	I0314 19:40:56.185001    8428 command_runner.go:130] > /usr/bin/crictl
	I0314 19:40:56.193733    8428 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:40:56.257753    8428 command_runner.go:130] > Version:  0.1.0
	I0314 19:40:56.257753    8428 command_runner.go:130] > RuntimeName:  docker
	I0314 19:40:56.257753    8428 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0314 19:40:56.257753    8428 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 19:40:56.260162    8428 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 19:40:56.266763    8428 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:40:56.296840    8428 command_runner.go:130] > 25.0.4
	I0314 19:40:56.305077    8428 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:40:56.336519    8428 command_runner.go:130] > 25.0.4
	I0314 19:40:56.342370    8428 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 19:40:56.342370    8428 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 19:40:56.347124    8428 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 19:40:56.347124    8428 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 19:40:56.347124    8428 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 19:40:56.347124    8428 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 19:40:56.349770    8428 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 19:40:56.349770    8428 ip.go:210] interface addr: 172.17.80.1/20
	I0314 19:40:56.357988    8428 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 19:40:56.364079    8428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:40:56.384355    8428 kubeadm.go:877] updating cluster {Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.93.236 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.84.215 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:40:56.384641    8428 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:40:56.391424    8428 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 19:40:56.419361    8428 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0314 19:40:56.419361    8428 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0314 19:40:56.419361    8428 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:40:56.419361    8428 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0314 19:40:56.420806    8428 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0314 19:40:56.420806    8428 docker.go:615] Images already preloaded, skipping extraction
	I0314 19:40:56.431512    8428 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 19:40:56.456399    8428 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0314 19:40:56.456480    8428 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0314 19:40:56.456480    8428 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:40:56.456480    8428 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0314 19:40:56.456480    8428 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0314 19:40:56.456480    8428 cache_images.go:84] Images are preloaded, skipping loading
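
	cache_images.go:84 decides preload extraction can be skipped because every image the cluster needs already shows up in docker images. The check amounts to listing repo:tag pairs from the runtime and verifying the expected set is a subset, roughly as follows (an illustrative helper, not minikube's own code):

	    package preload

	    import (
	    	"os/exec"
	    	"strings"
	    )

	    // preloadedOK reports whether every expected repo:tag is already
	    // present in the runtime's image list.
	    func preloadedOK(expected []string) (bool, error) {
	    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	    	if err != nil {
	    		return false, err
	    	}
	    	have := map[string]bool{}
	    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
	    		have[line] = true
	    	}
	    	for _, img := range expected {
	    		if !have[img] {
	    			return false, nil
	    		}
	    	}
	    	return true, nil
	    }
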
	I0314 19:40:56.456480    8428 kubeadm.go:928] updating node { 172.17.93.236 8443 v1.28.4 docker true true} ...
	I0314 19:40:56.456480    8428 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-442000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.93.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:40:56.463446    8428 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 19:40:56.493313    8428 command_runner.go:130] > cgroupfs
	I0314 19:40:56.494532    8428 cni.go:84] Creating CNI manager for ""
	I0314 19:40:56.494603    8428 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 19:40:56.494674    8428 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:40:56.494700    8428 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.93.236 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-442000 NodeName:multinode-442000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.93.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.93.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:40:56.494700    8428 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.93.236
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-442000"
	  kubeletExtraArgs:
	    node-ip: 172.17.93.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.93.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
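
	kubeadm.go:187 renders the three-document config above (InitConfiguration, ClusterConfiguration, then the kubelet and kube-proxy configs) from the options struct logged at kubeadm.go:181; the "0%!"(MISSING) values in evictionHard are once more fmt's missing-argument marker, apparently wrapping the literal "0%" thresholds. Conceptually the rendering is a template execution; a toy version covering only the InitConfiguration head (the template and struct here are illustrative, not minikube's):

	    package main

	    import (
	    	"os"
	    	"text/template"
	    )

	    // A fragment of the config rendered as a text/template.
	    var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
	    kind: InitConfiguration
	    localAPIEndpoint:
	      advertiseAddress: {{.AdvertiseAddress}}
	      bindPort: {{.APIServerPort}}
	    `))

	    func main() {
	    	// Values taken from the kubeadm options logged above.
	    	_ = initCfg.Execute(os.Stdout, struct {
	    		AdvertiseAddress string
	    		APIServerPort    int
	    	}{"172.17.93.236", 8443})
	    }
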
	
	I0314 19:40:56.504511    8428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:40:56.521995    8428 command_runner.go:130] > kubeadm
	I0314 19:40:56.521995    8428 command_runner.go:130] > kubectl
	I0314 19:40:56.521995    8428 command_runner.go:130] > kubelet
	I0314 19:40:56.522073    8428 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:40:56.531041    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:40:56.546860    8428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0314 19:40:56.575351    8428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:40:56.608897    8428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0314 19:40:56.647785    8428 ssh_runner.go:195] Run: grep 172.17.93.236	control-plane.minikube.internal$ /etc/hosts
	I0314 19:40:56.653743    8428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.93.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:40:56.683448    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:40:56.876493    8428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:40:56.903499    8428 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000 for IP: 172.17.93.236
	I0314 19:40:56.903499    8428 certs.go:194] generating shared ca certs ...
	I0314 19:40:56.903499    8428 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:40:56.903499    8428 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 19:40:56.904508    8428 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 19:40:56.904508    8428 certs.go:256] generating profile certs ...
	I0314 19:40:56.905498    8428 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\client.key
	I0314 19:40:56.905498    8428 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.4297411e
	I0314 19:40:56.905498    8428 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.4297411e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.93.236]
	I0314 19:40:56.973061    8428 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.4297411e ...
	I0314 19:40:56.973061    8428 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.4297411e: {Name:mk3aa0c8e492a00a020e4819ada54e3fb813a9b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:40:56.974071    8428 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.4297411e ...
	I0314 19:40:56.974071    8428 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.4297411e: {Name:mk67eb1255f403684b279a0cad001ea7a631783c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:40:56.975243    8428 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt.4297411e -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt
	I0314 19:40:56.989288    8428 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key.4297411e -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key
	I0314 19:40:56.990279    8428 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key
	I0314 19:40:56.990279    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 19:40:56.990279    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 19:40:56.990279    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 19:40:56.990279    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 19:40:56.990279    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 19:40:56.991281    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 19:40:56.991281    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 19:40:56.991281    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 19:40:56.991281    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 19:40:56.991281    8428 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 19:40:56.991281    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 19:40:56.992289    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 19:40:56.992289    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 19:40:56.992289    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 19:40:56.992289    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 19:40:56.992289    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 19:40:56.992289    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:40:56.992289    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 19:40:56.993277    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:40:57.041055    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 19:40:57.085389    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:40:57.135501    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 19:40:57.177078    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 19:40:57.219978    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 19:40:57.263688    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:40:57.308090    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:40:57.349693    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 19:40:57.388829    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:40:57.443289    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 19:40:57.482666    8428 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:40:57.522357    8428 ssh_runner.go:195] Run: openssl version
	I0314 19:40:57.531101    8428 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 19:40:57.540550    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 19:40:57.567626    8428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 19:40:57.575461    8428 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:40:57.575461    8428 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:40:57.584643    8428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 19:40:57.592872    8428 command_runner.go:130] > 3ec20f2e
	I0314 19:40:57.601393    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:40:57.627162    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:40:57.658079    8428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:40:57.665232    8428 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:40:57.665232    8428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:40:57.674049    8428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:40:57.681843    8428 command_runner.go:130] > b5213941
	I0314 19:40:57.690689    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:40:57.717923    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 19:40:57.745112    8428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 19:40:57.751922    8428 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:40:57.752117    8428 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:40:57.763062    8428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 19:40:57.771658    8428 command_runner.go:130] > 51391683
	I0314 19:40:57.780245    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
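
For context on the three hash-and-link rounds above: OpenSSL resolves trusted CAs in /etc/ssl/certs by subject hash, so each certificate gets a symlink named <hash>.0, where the hash is what `openssl x509 -hash -noout` prints. A minimal Go sketch of the same convention (the certificate path is illustrative, taken from the log; writing to /etc/ssl/certs requires root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/110522.pem" // illustrative path from the log
        // Ask OpenSSL for the subject hash, e.g. "3ec20f2e" above.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // equivalent of `ln -fs`: replace any existing link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }
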
	I0314 19:40:57.810149    8428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:40:57.817135    8428 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:40:57.817379    8428 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0314 19:40:57.817379    8428 command_runner.go:130] > Device: 8,1	Inode: 9430309     Links: 1
	I0314 19:40:57.817466    8428 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 19:40:57.817466    8428 command_runner.go:130] > Access: 2024-03-14 19:18:50.767195126 +0000
	I0314 19:40:57.817466    8428 command_runner.go:130] > Modify: 2024-03-14 19:18:50.767195126 +0000
	I0314 19:40:57.817540    8428 command_runner.go:130] > Change: 2024-03-14 19:18:50.767195126 +0000
	I0314 19:40:57.817589    8428 command_runner.go:130] >  Birth: 2024-03-14 19:18:50.767195126 +0000
	I0314 19:40:57.827750    8428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:40:57.837857    8428 command_runner.go:130] > Certificate will not expire
	I0314 19:40:57.846977    8428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:40:57.856185    8428 command_runner.go:130] > Certificate will not expire
	I0314 19:40:57.864861    8428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:40:57.874470    8428 command_runner.go:130] > Certificate will not expire
	I0314 19:40:57.885563    8428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:40:57.895080    8428 command_runner.go:130] > Certificate will not expire
	I0314 19:40:57.903869    8428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:40:57.914464    8428 command_runner.go:130] > Certificate will not expire
	I0314 19:40:57.923585    8428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:40:57.933178    8428 command_runner.go:130] > Certificate will not expire
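
The `-checkend 86400` invocations above ask whether each certificate expires within the next 24 hours; exit status 0 produces the "Certificate will not expire" lines. The equivalent check with Go's standard library, sketched against one of the cert paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // -checkend 86400: does the cert expire within the next 24h?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
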
	I0314 19:40:57.933561    8428 kubeadm.go:391] StartCluster: {Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.93.236 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.80.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.84.215 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:40:57.939846    8428 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 19:40:57.974028    8428 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 19:40:57.992181    8428 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0314 19:40:57.992251    8428 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0314 19:40:57.992251    8428 command_runner.go:130] > /var/lib/minikube/etcd:
	I0314 19:40:57.992251    8428 command_runner.go:130] > member
	W0314 19:40:57.992342    8428 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:40:57.992375    8428 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:40:57.992375    8428 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:40:58.001174    8428 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:40:58.016522    8428 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:40:58.017278    8428 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-442000" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:40:58.018120    8428 kubeconfig.go:62] C:\Users\jenkins.minikube7\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-442000" cluster setting kubeconfig missing "multinode-442000" context setting]
	I0314 19:40:58.018690    8428 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:40:58.032678    8428 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:40:58.033397    8428 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.93.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000/client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:40:58.034722    8428 cert_rotation.go:137] Starting client certificate rotation controller
	I0314 19:40:58.043318    8428 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:40:58.060922    8428 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0314 19:40:58.060922    8428 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:40:58.060922    8428 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0314 19:40:58.060922    8428 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0314 19:40:58.060922    8428 command_runner.go:130] >  kind: InitConfiguration
	I0314 19:40:58.060922    8428 command_runner.go:130] >  localAPIEndpoint:
	I0314 19:40:58.060922    8428 command_runner.go:130] > -  advertiseAddress: 172.17.86.124
	I0314 19:40:58.060922    8428 command_runner.go:130] > +  advertiseAddress: 172.17.93.236
	I0314 19:40:58.060922    8428 command_runner.go:130] >    bindPort: 8443
	I0314 19:40:58.060922    8428 command_runner.go:130] >  bootstrapTokens:
	I0314 19:40:58.060922    8428 command_runner.go:130] >    - groups:
	I0314 19:40:58.060922    8428 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0314 19:40:58.060922    8428 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0314 19:40:58.060922    8428 command_runner.go:130] >    name: "multinode-442000"
	I0314 19:40:58.060922    8428 command_runner.go:130] >    kubeletExtraArgs:
	I0314 19:40:58.060922    8428 command_runner.go:130] > -    node-ip: 172.17.86.124
	I0314 19:40:58.060922    8428 command_runner.go:130] > +    node-ip: 172.17.93.236
	I0314 19:40:58.060922    8428 command_runner.go:130] >    taints: []
	I0314 19:40:58.060922    8428 command_runner.go:130] >  ---
	I0314 19:40:58.060922    8428 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0314 19:40:58.060922    8428 command_runner.go:130] >  kind: ClusterConfiguration
	I0314 19:40:58.060922    8428 command_runner.go:130] >  apiServer:
	I0314 19:40:58.060922    8428 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.17.86.124"]
	I0314 19:40:58.060922    8428 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.17.93.236"]
	I0314 19:40:58.060922    8428 command_runner.go:130] >    extraArgs:
	I0314 19:40:58.060922    8428 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0314 19:40:58.060922    8428 command_runner.go:130] >  controllerManager:
	I0314 19:40:58.060922    8428 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.17.86.124
	+  advertiseAddress: 172.17.93.236
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-442000"
	   kubeletExtraArgs:
	-    node-ip: 172.17.86.124
	+    node-ip: 172.17.93.236
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.17.86.124"]
	+  certSANs: ["127.0.0.1", "localhost", "172.17.93.236"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
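
The drift decision above is driven entirely by diff's exit status: `diff -u` exits 0 when the deployed kubeadm.yaml matches the freshly rendered kubeadm.yaml.new and 1 when they differ. Here the node's advertise address moved from 172.17.86.124 to 172.17.93.236 between runs, so the config is flagged as drifted and the cluster is reconfigured from the new file. A hedged Go sketch of that check:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrifted reports whether the deployed kubeadm config differs from the
    // freshly rendered one, using diff's exit status (0 = same, 1 = different).
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // exit 0: identical, nothing to do
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // exit 1: files differ, reconfigure
        }
        return false, "", err // exit 2 or worse: a real error
    }

    func main() {
        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        fmt.Println("drifted:", drifted)
        fmt.Print(diff)
    }
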
	I0314 19:40:58.060922    8428 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:40:58.067921    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 19:40:58.101075    8428 command_runner.go:130] > 8899bc003893
	I0314 19:40:58.101075    8428 command_runner.go:130] > 07c2872c48ed
	I0314 19:40:58.101075    8428 command_runner.go:130] > b179d157b6b2
	I0314 19:40:58.101075    8428 command_runner.go:130] > a3dba3fc54c0
	I0314 19:40:58.101075    8428 command_runner.go:130] > 1a321c0e8997
	I0314 19:40:58.101075    8428 command_runner.go:130] > 2a62baf3f1b4
	I0314 19:40:58.101075    8428 command_runner.go:130] > 9b3244b47278
	I0314 19:40:58.101075    8428 command_runner.go:130] > b046b896affe
	I0314 19:40:58.101075    8428 command_runner.go:130] > cd640f130e42
	I0314 19:40:58.101075    8428 command_runner.go:130] > dbb603289bf1
	I0314 19:40:58.101075    8428 command_runner.go:130] > 16b80f73683d
	I0314 19:40:58.101075    8428 command_runner.go:130] > 9585e3eb2ead
	I0314 19:40:58.101075    8428 command_runner.go:130] > 54e39762d7a6
	I0314 19:40:58.101075    8428 command_runner.go:130] > 102c907609a3
	I0314 19:40:58.101075    8428 command_runner.go:130] > ab390fc53b99
	I0314 19:40:58.101075    8428 command_runner.go:130] > af5b88117f99
	I0314 19:40:58.101075    8428 docker.go:483] Stopping containers: [8899bc003893 07c2872c48ed b179d157b6b2 a3dba3fc54c0 1a321c0e8997 2a62baf3f1b4 9b3244b47278 b046b896affe cd640f130e42 dbb603289bf1 16b80f73683d 9585e3eb2ead 54e39762d7a6 102c907609a3 ab390fc53b99 af5b88117f99]
	I0314 19:40:58.109662    8428 ssh_runner.go:195] Run: docker stop 8899bc003893 07c2872c48ed b179d157b6b2 a3dba3fc54c0 1a321c0e8997 2a62baf3f1b4 9b3244b47278 b046b896affe cd640f130e42 dbb603289bf1 16b80f73683d 9585e3eb2ead 54e39762d7a6 102c907609a3 ab390fc53b99 af5b88117f99
	I0314 19:40:58.134945    8428 command_runner.go:130] > 8899bc003893
	I0314 19:40:58.134945    8428 command_runner.go:130] > 07c2872c48ed
	I0314 19:40:58.134945    8428 command_runner.go:130] > b179d157b6b2
	I0314 19:40:58.134945    8428 command_runner.go:130] > a3dba3fc54c0
	I0314 19:40:58.134945    8428 command_runner.go:130] > 1a321c0e8997
	I0314 19:40:58.134945    8428 command_runner.go:130] > 2a62baf3f1b4
	I0314 19:40:58.134945    8428 command_runner.go:130] > 9b3244b47278
	I0314 19:40:58.134945    8428 command_runner.go:130] > b046b896affe
	I0314 19:40:58.134945    8428 command_runner.go:130] > cd640f130e42
	I0314 19:40:58.134945    8428 command_runner.go:130] > dbb603289bf1
	I0314 19:40:58.134945    8428 command_runner.go:130] > 16b80f73683d
	I0314 19:40:58.134945    8428 command_runner.go:130] > 9585e3eb2ead
	I0314 19:40:58.134945    8428 command_runner.go:130] > 54e39762d7a6
	I0314 19:40:58.134945    8428 command_runner.go:130] > 102c907609a3
	I0314 19:40:58.134945    8428 command_runner.go:130] > ab390fc53b99
	I0314 19:40:58.134945    8428 command_runner.go:130] > af5b88117f99
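
The name filter `k8s_.*_(kube-system)_` works because dockershim/cri-dockerd names containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so matching on the namespace segment selects exactly the kube-system containers. A sketch of the list-then-stop sequence the two commands above perform:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List all containers whose kubelet-assigned names put them in kube-system.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return
        }
        fmt.Println("Stopping containers:", ids)
        // docker stop accepts all IDs in a single invocation, as the log shows.
        if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
            panic(err)
        }
    }
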
	I0314 19:40:58.145935    8428 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:40:58.181868    8428 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:40:58.199931    8428 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0314 19:40:58.199970    8428 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0314 19:40:58.199970    8428 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0314 19:40:58.199970    8428 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:40:58.199970    8428 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:40:58.199970    8428 kubeadm.go:156] found existing configuration files:
	
	I0314 19:40:58.208510    8428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:40:58.225973    8428 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:40:58.226140    8428 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:40:58.238965    8428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:40:58.266015    8428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:40:58.282779    8428 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:40:58.282884    8428 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:40:58.292147    8428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:40:58.317530    8428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:40:58.334084    8428 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:40:58.334204    8428 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:40:58.343828    8428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:40:58.372412    8428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:40:58.387831    8428 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:40:58.387831    8428 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:40:58.396514    8428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
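
Each grep/rm pair above applies one rule: a kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8443 is treated as stale and deleted so the kubeconfig phase below can regenerate it. In this run all four files are missing, so every grep fails and the `rm -f` calls are no-ops. A compact sketch of that loop:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            // grep exits non-zero when the endpoint (or the file itself) is missing.
            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
                fmt.Println("removing stale config:", f)
                _ = os.Remove(f) // mirrors `sudo rm -f`: ignore already-missing files
            }
        }
    }
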
	I0314 19:40:58.421893    8428 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:40:58.437677    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:40:58.745595    8428 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:40:58.745691    8428 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0314 19:40:58.745691    8428 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0314 19:40:58.745691    8428 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:40:58.745785    8428 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0314 19:40:58.745785    8428 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:40:58.745825    8428 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0314 19:40:58.745825    8428 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0314 19:40:58.745857    8428 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:40:58.745902    8428 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:40:58.745936    8428 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:40:58.745980    8428 command_runner.go:130] > [certs] Using the existing "sa" key
	I0314 19:40:58.746082    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:40:59.622877    8428 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:40:59.622877    8428 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:40:59.622877    8428 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:40:59.622877    8428 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:40:59.622877    8428 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:40:59.622877    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:40:59.919191    8428 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:40:59.919229    8428 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:40:59.919229    8428 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0314 19:40:59.919229    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:41:00.010216    8428 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:41:00.010216    8428 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:41:00.010216    8428 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:41:00.010216    8428 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:41:00.010216    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:41:00.104060    8428 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
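
Rather than a full `kubeadm init`, the restart path replays individual init phases against the same /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, then etcd, in that order. A sketch of the sequence as it appears in the commands above (binary path and PATH prefix copied from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            script := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
            cmd := exec.Command("/bin/bash", "-c", script)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                panic(err) // any failed phase aborts the restart
            }
        }
    }
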
	I0314 19:41:00.104060    8428 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:41:00.113047    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:41:00.616123    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:41:01.124257    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:41:01.628803    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:41:02.121788    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:41:02.142784    8428 command_runner.go:130] > 2008
	I0314 19:41:02.143188    8428 api_server.go:72] duration metric: took 2.0389736s to wait for apiserver process to appear ...
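
The five pgrep runs above are a ~500ms polling loop: `pgrep -xnf kube-apiserver.*minikube.*` exits non-zero until a matching process exists, then prints the newest matching PID (2008 here). A sketch of that wait:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process started from the
    // minikube config appears, returning its PID ("2008" in the run above).
    func waitForAPIServer(timeout time.Duration) (string, bool) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil { // pgrep exits non-zero while no process matches
                return strings.TrimSpace(string(out)), true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return "", false
    }

    func main() {
        if pid, ok := waitForAPIServer(30 * time.Second); ok {
            fmt.Println("kube-apiserver pid:", pid)
        }
    }
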
	I0314 19:41:02.143188    8428 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:41:02.143188    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:41:05.419799    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:41:05.419799    8428 api_server.go:103] status: https://172.17.93.236:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:41:05.419799    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:41:05.503543    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:41:05.503543    8428 api_server.go:103] status: https://172.17.93.236:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:41:05.654492    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:41:05.665202    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:41:05.666026    8428 api_server.go:103] status: https://172.17.93.236:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:41:06.157882    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:41:06.186077    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:41:06.186077    8428 api_server.go:103] status: https://172.17.93.236:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:41:06.652460    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:41:06.660908    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:41:06.660908    8428 api_server.go:103] status: https://172.17.93.236:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:41:07.144026    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:41:07.150727    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 200:
	ok
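
The healthz progression above is the expected startup shape: 403 while the anonymous probe is rejected outright, 500 while poststarthooks such as rbac/bootstrap-roles are still reporting failure, then a bare 200 "ok". A polling sketch (the address is the control-plane IP from the log; TLS verification is skipped, as an anonymous probe against the self-signed serving cert would require):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Anonymous probe against a self-signed serving cert: skip verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        for {
            resp, err := client.Get("https://172.17.93.236:8443/healthz") // control-plane IP from the log
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // the bare "ok" above
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
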
	I0314 19:41:07.151685    8428 round_trippers.go:463] GET https://172.17.93.236:8443/version
	I0314 19:41:07.151743    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:07.151761    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:07.151761    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:07.162083    8428 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 19:41:07.162898    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:07.162898    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:07.162959    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:07.162959    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:07.162959    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:07.162959    8428 round_trippers.go:580]     Content-Length: 264
	I0314 19:41:07.162959    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:07 GMT
	I0314 19:41:07.162959    8428 round_trippers.go:580]     Audit-Id: adc14fa1-3ec8-4ca8-bcbf-285a1d507ddf
	I0314 19:41:07.162959    8428 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0314 19:41:07.162959    8428 api_server.go:141] control plane version: v1.28.4
	I0314 19:41:07.162959    8428 api_server.go:131] duration metric: took 5.0193918s to wait for apiserver health ...
	I0314 19:41:07.162959    8428 cni.go:84] Creating CNI manager for ""
	I0314 19:41:07.162959    8428 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 19:41:07.167153    8428 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0314 19:41:07.180755    8428 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0314 19:41:07.189531    8428 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0314 19:41:07.189531    8428 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0314 19:41:07.189531    8428 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0314 19:41:07.189531    8428 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 19:41:07.189531    8428 command_runner.go:130] > Access: 2024-03-14 19:39:37.562004600 +0000
	I0314 19:41:07.189531    8428 command_runner.go:130] > Modify: 2024-03-13 22:53:41.000000000 +0000
	I0314 19:41:07.189531    8428 command_runner.go:130] > Change: 2024-03-14 19:39:30.743000000 +0000
	I0314 19:41:07.189531    8428 command_runner.go:130] >  Birth: -
	I0314 19:41:07.190135    8428 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0314 19:41:07.190135    8428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0314 19:41:07.262895    8428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0314 19:41:08.791840    8428 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0314 19:41:08.791879    8428 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0314 19:41:08.791879    8428 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0314 19:41:08.791879    8428 command_runner.go:130] > daemonset.apps/kindnet configured
	I0314 19:41:08.791934    8428 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5289228s)
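
Note the "unchanged"/"configured" lines: `kubectl apply` is idempotent, so re-applying the kindnet manifest on restart only patches the DaemonSet while the unchanged RBAC objects are left alone. A sketch of the apply step using the pinned in-guest kubectl, as in the log:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // The manifest was already shipped to the guest (scp memory --> /var/tmp/minikube/cni.yaml);
        // applying it with the pinned kubectl is a server-side no-op for unchanged objects.
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", "/var/tmp/minikube/cni.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
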
	I0314 19:41:08.791987    8428 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:41:08.792153    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods
	I0314 19:41:08.792153    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:08.792153    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:08.792153    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:08.797722    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:08.797722    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:08.797722    8428 round_trippers.go:580]     Audit-Id: 2b33a3ae-5d46-4e40-a15f-cfca67283dda
	I0314 19:41:08.797722    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:08.797722    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:08.797722    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:08.798730    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:08.798730    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:08.798730    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1729"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83773 chars]
	I0314 19:41:08.806668    8428 system_pods.go:59] 12 kube-system pods found
	I0314 19:41:08.806668    8428 system_pods.go:61] "coredns-5dd5756b68-d22jc" [2a563b3f-a175-4dc2-9f0b-67dbaefbfaac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:41:08.806668    8428 system_pods.go:61] "etcd-multinode-442000" [106cc31d-907f-4853-9e8d-f13c8ac4e398] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:41:08.806668    8428 system_pods.go:61] "kindnet-7b9lf" [677b9084-0026-4b21-b041-445940624ed7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0314 19:41:08.806668    8428 system_pods.go:61] "kindnet-c7m4p" [926a47cb-e444-455d-8b74-d17a229020a1] Running
	I0314 19:41:08.806668    8428 system_pods.go:61] "kindnet-r7zdb" [69b103aa-023b-4243-ba7b-875106aac183] Running
	I0314 19:41:08.806668    8428 system_pods.go:61] "kube-apiserver-multinode-442000" [ebdd5ddf-2b02-4315-bc64-1b10c383d507] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:41:08.806668    8428 system_pods.go:61] "kube-controller-manager-multinode-442000" [b16fc874-ef74-44ca-a54f-bb678bf982df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:41:08.806668    8428 system_pods.go:61] "kube-proxy-72dzs" [80b840b0-3803-4102-a966-ea73aed74f49] Running
	I0314 19:41:08.806668    8428 system_pods.go:61] "kube-proxy-cg28g" [c7f798bf-6722-4731-af8d-ccd5703d116e] Running
	I0314 19:41:08.806668    8428 system_pods.go:61] "kube-proxy-w2qls" [7a53e602-282e-4b63-a993-a5d23d3c615f] Running
	I0314 19:41:08.806668    8428 system_pods.go:61] "kube-scheduler-multinode-442000" [76b10598-fe0d-4a14-a8e4-a32221fbb68f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:41:08.806668    8428 system_pods.go:61] "storage-provisioner" [65d76566-4401-4b28-8452-10ed98624901] Running
	I0314 19:41:08.806668    8428 system_pods.go:74] duration metric: took 14.6396ms to wait for pod list to return data ...
	I0314 19:41:08.806668    8428 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:41:08.806668    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes
	I0314 19:41:08.806668    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:08.806668    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:08.806668    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:08.814106    8428 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:41:08.814106    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:08.814106    8428 round_trippers.go:580]     Audit-Id: e0708dc2-5f29-4486-b61c-97fc222cf858
	I0314 19:41:08.814106    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:08.814106    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:08.814106    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:08.814106    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:08.814106    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:08.814106    8428 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1729"},"items":[{"metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15627 chars]
	I0314 19:41:08.815709    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:41:08.815709    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:41:08.815709    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:41:08.815709    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:41:08.815709    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:41:08.815709    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:41:08.815709    8428 node_conditions.go:105] duration metric: took 9.0408ms to run NodePressure ...
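
The NodePressure check above reads each node's capacity from the NodeList (three nodes, each with 2 CPUs and 17734596Ki of ephemeral storage). A sketch of the same read with client-go (requires the k8s.io/client-go module; the kubeconfig path is the in-guest one from the log and is illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path taken from the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity is a ResourceList; Cpu()/StorageEphemeral() return *resource.Quantity.
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        }
    }
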
	I0314 19:41:08.815709    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:41:09.171059    8428 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0314 19:41:09.171059    8428 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0314 19:41:09.171164    8428 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:41:09.171320    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0314 19:41:09.171391    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.171391    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.171391    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.175576    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:09.176542    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.176542    8428 round_trippers.go:580]     Audit-Id: 210afc00-498f-40e3-9c5a-8e3b45f11632
	I0314 19:41:09.176581    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.176581    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.176581    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.176581    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.176581    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.176706    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1732"},"items":[{"metadata":{"name":"etcd-multinode-442000","namespace":"kube-system","uid":"106cc31d-907f-4853-9e8d-f13c8ac4e398","resourceVersion":"1726","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.93.236:2379","kubernetes.io/config.hash":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.mirror":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.seen":"2024-03-14T19:41:00.367789550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29350 chars]
	I0314 19:41:09.178835    8428 kubeadm.go:733] kubelet initialised
	I0314 19:41:09.178868    8428 kubeadm.go:734] duration metric: took 7.7038ms waiting for restarted kubelet to initialise ...
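
The `labelSelector=tier%3Dcontrol-plane` request above is how the restarted kubelet is confirmed to have re-registered its static control-plane pods. The same query, expressed against the clientset `cs` from the previous sketch (again illustrative):

    // Sketch: list the static control-plane pods (etcd, kube-apiserver,
    // kube-controller-manager, kube-scheduler) that kubelet recreates
    // after a restart, matching the GET with labelSelector above.
    pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
        LabelSelector: "tier=control-plane",
    })
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        fmt.Println(p.Name, p.CreationTimestamp)
    }
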
	I0314 19:41:09.178910    8428 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:41:09.179016    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods
	I0314 19:41:09.179055    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.179055    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.179055    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.188274    8428 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0314 19:41:09.188335    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.188335    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.188335    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.188383    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.188383    8428 round_trippers.go:580]     Audit-Id: 5862cf79-e6ad-440a-b0d3-98c024526415
	I0314 19:41:09.188383    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.188383    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.189783    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1732"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83581 chars]
	I0314 19:41:09.193297    8428 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:09.193297    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:09.193297    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.193297    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.193297    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.196852    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:09.197333    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.197333    8428 round_trippers.go:580]     Audit-Id: 5140e513-374a-4f0c-84d5-c8083d5e75db
	I0314 19:41:09.197333    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.197333    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.197333    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.197408    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.197408    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.197537    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:09.198224    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:09.198295    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.198295    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.198295    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.200977    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:09.200977    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.201841    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.201841    8428 round_trippers.go:580]     Audit-Id: 55b96cd4-989f-4a8a-85a9-359add4fb771
	I0314 19:41:09.201841    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.201841    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.201876    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.201876    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.201995    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:09.201995    8428 pod_ready.go:97] node "multinode-442000" hosting pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.202527    8428 pod_ready.go:81] duration metric: took 9.2294ms for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:09.202568    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000" hosting pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
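
Each per-pod wait above is gated on the hosting node's Ready condition: when the node reports "Ready":"False", the pod's wait is recorded as skipped in milliseconds rather than blocking for the full 4m0s. A hedged helper showing that condition check (illustrative, not minikube's pod_ready.go):

    // nodeReady reports whether a node's NodeReady condition is True —
    // the same status field the pod_ready.go lines above test before
    // committing to a per-pod wait.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
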
	I0314 19:41:09.202584    8428 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:09.202696    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-442000
	I0314 19:41:09.202735    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.202735    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.202735    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.205818    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:09.205818    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.205818    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.205818    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.205818    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.205818    8428 round_trippers.go:580]     Audit-Id: 94feb1b6-bc4f-4304-8f06-b404ed63c50a
	I0314 19:41:09.205818    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.205818    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.205818    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-442000","namespace":"kube-system","uid":"106cc31d-907f-4853-9e8d-f13c8ac4e398","resourceVersion":"1726","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.93.236:2379","kubernetes.io/config.hash":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.mirror":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.seen":"2024-03-14T19:41:00.367789550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0314 19:41:09.205818    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:09.205818    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.205818    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.205818    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.208955    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:09.208955    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.209921    8428 round_trippers.go:580]     Audit-Id: 093453b0-7d6d-43e9-9174-a6701217f77c
	I0314 19:41:09.209921    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.209921    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.209921    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.209921    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.209921    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.209921    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:09.210500    8428 pod_ready.go:97] node "multinode-442000" hosting pod "etcd-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.210500    8428 pod_ready.go:81] duration metric: took 7.9156ms for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:09.210500    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000" hosting pod "etcd-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.210587    8428 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:09.210697    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-442000
	I0314 19:41:09.210719    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.210719    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.210719    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.213277    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:09.213911    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.213911    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.213911    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.213911    8428 round_trippers.go:580]     Audit-Id: 7ff85a82-040a-458b-8860-4f2f62773e57
	I0314 19:41:09.213911    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.213911    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.213911    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.214263    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-442000","namespace":"kube-system","uid":"ebdd5ddf-2b02-4315-bc64-1b10c383d507","resourceVersion":"1719","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.93.236:8443","kubernetes.io/config.hash":"7754d2f32966faec8123dc3b8a2af767","kubernetes.io/config.mirror":"7754d2f32966faec8123dc3b8a2af767","kubernetes.io/config.seen":"2024-03-14T19:41:00.350706636Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7644 chars]
	I0314 19:41:09.214794    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:09.214794    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.214794    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.214794    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.218902    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:09.218902    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.218902    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.218902    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.219026    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.219026    8428 round_trippers.go:580]     Audit-Id: 115a34dd-3caa-4ad3-adeb-a34843207664
	I0314 19:41:09.219026    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.219026    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.219193    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:09.219582    8428 pod_ready.go:97] node "multinode-442000" hosting pod "kube-apiserver-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.219631    8428 pod_ready.go:81] duration metric: took 9.0435ms for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:09.219631    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000" hosting pod "kube-apiserver-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.219631    8428 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:09.219736    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-442000
	I0314 19:41:09.219736    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.219736    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.219800    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.222977    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:09.222977    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.222977    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.223309    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.223309    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.223309    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.223309    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.223309    8428 round_trippers.go:580]     Audit-Id: a2ab6a43-1b37-46df-bacf-ec964ada0191
	I0314 19:41:09.223579    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-442000","namespace":"kube-system","uid":"b16fc874-ef74-44ca-a54f-bb678bf982df","resourceVersion":"1717","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.mirror":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.seen":"2024-03-14T19:18:55.420205308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I0314 19:41:09.224149    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:09.224149    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.224198    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.224198    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.226944    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:09.226944    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.227244    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.227244    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.227244    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.227244    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.227244    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.227244    8428 round_trippers.go:580]     Audit-Id: 2a7c63aa-1465-4e4c-9f5f-c53b397ad2e1
	I0314 19:41:09.227354    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:09.227777    8428 pod_ready.go:97] node "multinode-442000" hosting pod "kube-controller-manager-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.227853    8428 pod_ready.go:81] duration metric: took 8.2206ms for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:09.227853    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000" hosting pod "kube-controller-manager-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:09.227853    8428 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:09.393363    8428 request.go:629] Waited for 165.0379ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:41:09.393363    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:41:09.393363    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.393363    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.393363    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.397987    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:09.397987    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.397987    8428 round_trippers.go:580]     Audit-Id: eb7d23d8-e7cd-4193-b454-7524dddfc577
	I0314 19:41:09.397987    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.398185    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.398259    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.398259    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.398259    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.399033    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-72dzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"80b840b0-3803-4102-a966-ea73aed74f49","resourceVersion":"621","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0314 19:41:09.596494    8428 request.go:629] Waited for 197.4463ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:41:09.596805    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:41:09.596805    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.596932    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.596932    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.599863    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:09.599863    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.599863    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.599863    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.599863    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.599863    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:09 GMT
	I0314 19:41:09.599863    8428 round_trippers.go:580]     Audit-Id: 791da0aa-59cc-4e18-8f9c-c00c881216bf
	I0314 19:41:09.599863    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.600918    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"1346","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0314 19:41:09.600918    8428 pod_ready.go:92] pod "kube-proxy-72dzs" in "kube-system" namespace has status "Ready":"True"
	I0314 19:41:09.600918    8428 pod_ready.go:81] duration metric: took 373.0376ms for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
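
The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own token-bucket rate limiter, not from the API server: with the default rest.Config limits (QPS 5, burst 10), a burst of GETs like the ones in this loop drains the bucket and later requests queue for roughly 200ms each. A sketch of widening those limits on the `cfg` from the first example (the values here are illustrative):

    // Sketch: raise client-go's client-side rate limit so a polling loop
    // like this one is not queued. The library defaults are QPS=5, Burst=10.
    cfg.QPS = 50
    cfg.Burst = 100
    cs, err := kubernetes.NewForConfig(cfg) // rebuild the clientset with the new limits
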
	I0314 19:41:09.600918    8428 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:09.801186    8428 request.go:629] Waited for 199.7314ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:41:09.801186    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:41:09.801186    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:09.801186    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:09.801186    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:09.805078    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:09.805446    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:09.805446    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:09.805446    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:10 GMT
	I0314 19:41:09.805446    8428 round_trippers.go:580]     Audit-Id: 57df20d1-b284-4a39-97c6-a9be036bb196
	I0314 19:41:09.805446    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:09.805446    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:09.805446    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:09.805712    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cg28g","generateName":"kube-proxy-","namespace":"kube-system","uid":"c7f798bf-6722-4731-af8d-ccd5703d116e","resourceVersion":"1728","creationTimestamp":"2024-03-14T19:19:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0314 19:41:10.006267    8428 request.go:629] Waited for 199.5775ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:10.006267    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:10.006267    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:10.006267    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:10.006267    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:10.009844    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:10.010040    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:10.010040    8428 round_trippers.go:580]     Audit-Id: b32ba143-e2a3-4590-b5eb-17d46831f335
	I0314 19:41:10.010040    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:10.010040    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:10.010040    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:10.010040    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:10.010040    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:10 GMT
	I0314 19:41:10.010305    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:10.010451    8428 pod_ready.go:97] node "multinode-442000" hosting pod "kube-proxy-cg28g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:10.010451    8428 pod_ready.go:81] duration metric: took 409.5011ms for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:10.010451    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000" hosting pod "kube-proxy-cg28g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:10.010451    8428 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w2qls" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:10.193151    8428 request.go:629] Waited for 182.6868ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2qls
	I0314 19:41:10.193353    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2qls
	I0314 19:41:10.193353    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:10.193738    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:10.193777    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:10.197761    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:10.197819    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:10.197819    8428 round_trippers.go:580]     Audit-Id: 02391dd9-57bc-4e58-8d28-4228817b2666
	I0314 19:41:10.197819    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:10.197819    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:10.197819    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:10.197819    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:10.197872    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:10 GMT
	I0314 19:41:10.197872    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w2qls","generateName":"kube-proxy-","namespace":"kube-system","uid":"7a53e602-282e-4b63-a993-a5d23d3c615f","resourceVersion":"1678","creationTimestamp":"2024-03-14T19:26:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:26:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0314 19:41:10.398299    8428 request.go:629] Waited for 199.7405ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m03
	I0314 19:41:10.398801    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m03
	I0314 19:41:10.398801    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:10.398801    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:10.398801    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:10.402482    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:10.402482    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:10.403100    8428 round_trippers.go:580]     Audit-Id: daf7de4c-5774-4946-8e30-78a41a1a1ff5
	I0314 19:41:10.403100    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:10.403100    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:10.403100    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:10.403100    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:10.403100    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:10 GMT
	I0314 19:41:10.403480    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m03","uid":"1b8e342b-6e96-49e8-a22c-874445d29fe3","resourceVersion":"1688","creationTimestamp":"2024-03-14T19:36:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_36_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:36:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0314 19:41:10.404010    8428 pod_ready.go:97] node "multinode-442000-m03" hosting pod "kube-proxy-w2qls" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m03" has status "Ready":"Unknown"
	I0314 19:41:10.404075    8428 pod_ready.go:81] duration metric: took 393.5951ms for pod "kube-proxy-w2qls" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:10.404075    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000-m03" hosting pod "kube-proxy-w2qls" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m03" has status "Ready":"Unknown"
	I0314 19:41:10.404075    8428 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:10.601485    8428 request.go:629] Waited for 197.3948ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:41:10.601708    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:41:10.601708    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:10.601708    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:10.601708    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:10.606546    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:10.606546    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:10.606546    8428 round_trippers.go:580]     Audit-Id: 94bf8275-796e-459b-8502-5cfeed46fae1
	I0314 19:41:10.606546    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:10.606546    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:10.606546    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:10.606546    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:10.607571    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:10 GMT
	I0314 19:41:10.608010    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-442000","namespace":"kube-system","uid":"76b10598-fe0d-4a14-a8e4-a32221fbb68f","resourceVersion":"1716","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.mirror":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.seen":"2024-03-14T19:18:55.420206709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0314 19:41:10.804758    8428 request.go:629] Waited for 195.6455ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:10.804921    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:10.804984    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:10.805035    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:10.805035    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:10.809625    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:10.809625    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:10.809625    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:10.809625    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:10.809625    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:10.809625    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:11 GMT
	I0314 19:41:10.809625    8428 round_trippers.go:580]     Audit-Id: be49477e-f53e-4f00-9413-f03c1ac9aa0d
	I0314 19:41:10.809625    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:10.810319    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:10.810319    8428 pod_ready.go:97] node "multinode-442000" hosting pod "kube-scheduler-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:10.810853    8428 pod_ready.go:81] duration metric: took 406.7464ms for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	E0314 19:41:10.810853    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000" hosting pod "kube-scheduler-multinode-442000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000" has status "Ready":"False"
	I0314 19:41:10.810940    8428 pod_ready.go:38] duration metric: took 1.6319064s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
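
The whole extra-wait phase finished in 1.63s only because every per-pod wait short-circuited on the not-Ready control-plane node. The shape of one such 4m0s wait, sketched with the long-standing wait.PollImmediate helper (assumes the imports from the first example plus "time" and "k8s.io/apimachinery/pkg/util/wait"; pod and namespace names are taken from the log, and minikube's own loop differs in detail — it also applies the nodeReady gate shown earlier):

    // Sketch: poll a pod's Ready condition until true or a 4m0s timeout.
    err := wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-442000", metav1.GetOptions{})
        if err != nil {
            return false, nil // transient API errors: keep polling
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    })
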
	I0314 19:41:10.810940    8428 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:41:10.830620    8428 command_runner.go:130] > -16
	I0314 19:41:10.830765    8428 ops.go:34] apiserver oom_adj: -16
	I0314 19:41:10.830765    8428 kubeadm.go:591] duration metric: took 12.8374176s to restartPrimaryControlPlane
	I0314 19:41:10.830765    8428 kubeadm.go:393] duration metric: took 12.8962854s to StartCluster
	I0314 19:41:10.830818    8428 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:41:10.830884    8428 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:41:10.832480    8428 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:41:10.833753    8428 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.93.236 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 19:41:10.833753    8428 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:41:10.836872    8428 out.go:177] * Verifying Kubernetes components...
	I0314 19:41:10.839781    8428 out.go:177] * Enabled addons: 
	I0314 19:41:10.834364    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:41:10.843389    8428 addons.go:505] duration metric: took 9.6864ms for enable addons: enabled=[]
	I0314 19:41:10.854601    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:41:11.154424    8428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:41:11.194059    8428 node_ready.go:35] waiting up to 6m0s for node "multinode-442000" to be "Ready" ...
	I0314 19:41:11.194232    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:11.194232    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:11.194232    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:11.194232    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:11.196374    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:11.196374    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:11.196374    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:11.197177    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:11.197177    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:11.197177    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:11 GMT
	I0314 19:41:11.197177    8428 round_trippers.go:580]     Audit-Id: fbf20db4-2496-4d1f-a43f-a2ff2f9ea23b
	I0314 19:41:11.197177    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:11.197841    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:11.701344    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:11.701344    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:11.701436    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:11.701436    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:11.706144    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:11.706144    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:11.706144    8428 round_trippers.go:580]     Audit-Id: 8b6754d9-6d6d-4ba4-ae2d-b6e56683db54
	I0314 19:41:11.706144    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:11.706144    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:11.706144    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:11.706144    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:11.706144    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:11 GMT
	I0314 19:41:11.706572    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:12.200807    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:12.200885    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:12.200885    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:12.200885    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:12.204116    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:12.205168    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:12.205168    8428 round_trippers.go:580]     Audit-Id: bf1982f4-7453-45f2-a6e1-10adc79e2f21
	I0314 19:41:12.205168    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:12.205168    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:12.205168    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:12.205168    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:12.205168    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:12 GMT
	I0314 19:41:12.205470    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:12.703948    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:12.703948    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:12.703948    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:12.703948    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:12.708354    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:12.708354    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:12.708354    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:12.708354    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:12.708354    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:12.708354    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:12.708354    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:12 GMT
	I0314 19:41:12.708354    8428 round_trippers.go:580]     Audit-Id: 7f301715-1049-4f73-a7c0-a33d0761e77c
	I0314 19:41:12.708354    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:13.204021    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:13.204021    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:13.204021    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:13.204021    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:13.208948    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:13.208948    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:13.208948    8428 round_trippers.go:580]     Audit-Id: e73fe370-3658-4436-a73f-36b8bbbafdba
	I0314 19:41:13.209070    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:13.209070    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:13.209070    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:13.209070    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:13.209070    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:13 GMT
	I0314 19:41:13.209270    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:13.210162    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
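
The iterations above are the node_ready wait loop: roughly every 500ms the client issues GET /api/v1/nodes/multinode-442000, decodes the returned Node object, and logs a node_ready.go:53 check while the Ready condition is still False. As a minimal sketch of the same poll-until-Ready pattern, assuming client-go and a placeholder kubeconfig path (illustrative only, not minikube's actual node_ready.go implementation):

// Illustrative sketch: poll a node until its Ready condition is True,
// mirroring the ~500ms cadence visible in the log above. The kubeconfig
// path and function names are placeholders, not minikube's real code.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				// Done once kubelet reports the NodeReady condition as True.
				if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
		}
		time.Sleep(500 * time.Millisecond) // matches the interval seen above
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	// Placeholder path; substitute the kubeconfig written by the tool under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "multinode-442000", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

A production loop would usually delegate the timing to wait.PollUntilContextTimeout from k8s.io/apimachinery/pkg/util/wait rather than hand-rolling the sleep.
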
	I0314 19:41:13.706141    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:13.706141    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:13.706211    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:13.706211    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:13.709768    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:13.709768    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:13.709768    8428 round_trippers.go:580]     Audit-Id: d58f281a-b114-4d38-b710-a7d7929aceb7
	I0314 19:41:13.709768    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:13.709768    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:13.709768    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:13.709768    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:13.709768    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:13 GMT
	I0314 19:41:13.710632    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:14.207204    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:14.207274    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:14.207274    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:14.207274    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:14.211425    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:14.211521    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:14.211521    8428 round_trippers.go:580]     Audit-Id: 2e34aa9a-d1e2-48cf-8bc5-b1d3bbfe6e0a
	I0314 19:41:14.211521    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:14.211521    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:14.211521    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:14.211521    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:14.211521    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:14 GMT
	I0314 19:41:14.212063    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:14.708444    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:14.708692    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:14.708692    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:14.708692    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:14.713259    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:14.713338    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:14.713405    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:14.713405    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:14.713405    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:14.713405    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:14 GMT
	I0314 19:41:14.713405    8428 round_trippers.go:580]     Audit-Id: 37c5de44-3c87-407b-9fa1-9bfad7343a75
	I0314 19:41:14.713405    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:14.713405    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:15.196361    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:15.196361    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:15.196361    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:15.196361    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:15.200705    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:15.200705    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:15.200705    8428 round_trippers.go:580]     Audit-Id: 30f34791-a826-4657-9149-524b87a5b814
	I0314 19:41:15.200705    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:15.200705    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:15.200705    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:15.200705    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:15.200705    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:15 GMT
	I0314 19:41:15.200705    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:15.696025    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:15.696107    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:15.696107    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:15.696107    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:15.703736    8428 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:41:15.703736    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:15.703736    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:15 GMT
	I0314 19:41:15.703736    8428 round_trippers.go:580]     Audit-Id: ea741c36-6145-4cbb-a156-62765d4c3552
	I0314 19:41:15.703736    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:15.703736    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:15.703736    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:15.703736    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:15.704211    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:15.704409    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:16.196894    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:16.196894    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:16.196968    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:16.196968    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:16.201032    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:16.201032    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:16.201032    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:16 GMT
	I0314 19:41:16.201032    8428 round_trippers.go:580]     Audit-Id: 067b1039-0f92-4a7b-932f-2d641038029e
	I0314 19:41:16.201032    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:16.201032    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:16.201032    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:16.201032    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:16.201032    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:16.696270    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:16.696360    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:16.696434    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:16.696434    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:16.703698    8428 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:41:16.703698    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:16.703698    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:16.703698    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:16.703698    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:16 GMT
	I0314 19:41:16.703698    8428 round_trippers.go:580]     Audit-Id: 0c75326e-7ea4-4e83-8aea-0eb90c485978
	I0314 19:41:16.703698    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:16.703698    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:16.704230    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:17.196761    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:17.196830    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:17.196830    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:17.196830    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:17.200827    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:17.201243    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:17.201243    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:17.201243    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:17.201243    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:17 GMT
	I0314 19:41:17.201243    8428 round_trippers.go:580]     Audit-Id: a218d508-4b85-472d-b5f0-04fea64336c2
	I0314 19:41:17.201243    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:17.201243    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:17.201545    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:17.697485    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:17.697810    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:17.697810    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:17.697810    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:17.703943    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:41:17.703943    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:17.703943    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:17.703943    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:17.703943    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:17.703943    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:17.703943    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:17 GMT
	I0314 19:41:17.703943    8428 round_trippers.go:580]     Audit-Id: 13017c24-1c4c-49d1-9277-195a59a7263a
	I0314 19:41:17.704545    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:17.705229    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:18.198356    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:18.198436    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:18.198436    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:18.198436    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:18.202797    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:18.203639    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:18.203639    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:18.203639    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:18 GMT
	I0314 19:41:18.203639    8428 round_trippers.go:580]     Audit-Id: 8298cabd-c94d-43e2-89b4-2651e7265b30
	I0314 19:41:18.203639    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:18.203639    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:18.203639    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:18.204033    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:18.704147    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:18.704147    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:18.704220    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:18.704220    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:18.707736    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:18.707736    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:18.707736    8428 round_trippers.go:580]     Audit-Id: 6c81b5fa-b9e1-453e-bcf6-cddc21e9be5b
	I0314 19:41:18.707736    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:18.707736    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:18.707736    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:18.707736    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:18.707736    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:18 GMT
	I0314 19:41:18.708334    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1707","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0314 19:41:19.209680    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:19.209680    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:19.209680    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:19.209680    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:19.213464    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:19.213464    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:19.213464    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:19.213464    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:19.213464    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:19.213464    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:19 GMT
	I0314 19:41:19.213464    8428 round_trippers.go:580]     Audit-Id: bc7eb2b2-49cf-4b50-9552-8a0f91590f1b
	I0314 19:41:19.213464    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:19.213464    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:19.695554    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:19.695554    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:19.695554    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:19.695723    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:19.698463    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:19.698463    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:19.699421    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:19 GMT
	I0314 19:41:19.699421    8428 round_trippers.go:580]     Audit-Id: d4ae5292-034d-4f0b-b79b-25f6792d7cfb
	I0314 19:41:19.699421    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:19.699421    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:19.699421    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:19.699421    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:19.699572    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:20.198853    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:20.198853    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:20.198853    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:20.198853    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:20.202694    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:20.202694    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:20.202694    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:20.202694    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:20 GMT
	I0314 19:41:20.202694    8428 round_trippers.go:580]     Audit-Id: e21ed7cb-6d6e-4e88-b60d-a87969b0179f
	I0314 19:41:20.202694    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:20.202694    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:20.202694    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:20.202694    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:20.203410    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:20.702467    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:20.702467    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:20.702467    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:20.702467    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:20.706042    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:20.706042    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:20.706042    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:20.706042    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:20.706042    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:20.706042    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:20 GMT
	I0314 19:41:20.706042    8428 round_trippers.go:580]     Audit-Id: 0a229a6b-a0b1-4341-95f8-5bca0160e1a5
	I0314 19:41:20.706042    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:20.707040    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:21.205117    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:21.205506    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:21.205538    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:21.205538    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:21.214694    8428 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0314 19:41:21.214694    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:21.214694    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:21.214694    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:21.214694    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:21.214694    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:21 GMT
	I0314 19:41:21.214694    8428 round_trippers.go:580]     Audit-Id: 821abdd6-7c52-4ed1-8f58-e9bee1b83e46
	I0314 19:41:21.214694    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:21.215229    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:21.704306    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:21.704380    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:21.704380    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:21.704380    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:21.707571    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:21.708471    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:21.708471    8428 round_trippers.go:580]     Audit-Id: 3be7aecf-35c9-447c-97f4-c81e0d047d94
	I0314 19:41:21.708471    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:21.708471    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:21.708471    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:21.708596    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:21.708596    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:21 GMT
	I0314 19:41:21.708881    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:22.205630    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:22.205630    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:22.205630    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:22.205630    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:22.210204    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:22.210763    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:22.210763    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:22.210763    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:22.210763    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:22.210763    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:22 GMT
	I0314 19:41:22.210763    8428 round_trippers.go:580]     Audit-Id: 56e4f5e8-4bfd-485a-ab28-c96aa0a28bc9
	I0314 19:41:22.210894    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:22.211210    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:22.211926    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:22.704264    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:22.704264    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:22.704365    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:22.704365    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:22.710046    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:22.710046    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:22.710046    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:22 GMT
	I0314 19:41:22.710046    8428 round_trippers.go:580]     Audit-Id: 66f64f63-03e2-4fe1-a022-264696375071
	I0314 19:41:22.710046    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:22.710046    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:22.710046    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:22.711058    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:22.711220    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:23.204508    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:23.204508    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:23.204508    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:23.204567    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:23.208039    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:23.208039    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:23.208039    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:23 GMT
	I0314 19:41:23.208039    8428 round_trippers.go:580]     Audit-Id: b9c5ac56-9f98-4a63-9652-0c24db61ba8a
	I0314 19:41:23.208039    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:23.208039    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:23.208039    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:23.208039    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:23.208795    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:23.702700    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:23.702700    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:23.702700    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:23.702700    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:23.705762    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:23.706323    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:23.706323    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:23.706323    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:23.706323    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:23 GMT
	I0314 19:41:23.706323    8428 round_trippers.go:580]     Audit-Id: cab92b8e-c7b7-400d-8246-deac63b3eb4d
	I0314 19:41:23.706323    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:23.706323    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:23.706559    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:24.203705    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:24.203705    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:24.203705    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:24.203705    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:24.208317    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:24.208317    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:24.208317    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:24 GMT
	I0314 19:41:24.208317    8428 round_trippers.go:580]     Audit-Id: f346c234-b78b-444e-a2f4-80249f4bad42
	I0314 19:41:24.208317    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:24.208317    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:24.208317    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:24.208317    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:24.208317    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:24.706100    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:24.706156    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:24.706156    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:24.706156    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:24.709644    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:24.710403    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:24.710403    8428 round_trippers.go:580]     Audit-Id: 8e67a93a-d8e5-41c3-a1f1-34af91412eef
	I0314 19:41:24.710403    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:24.710403    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:24.710403    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:24.710403    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:24.710403    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:24 GMT
	I0314 19:41:24.710538    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:24.711437    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:25.206876    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:25.207163    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:25.207163    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:25.207163    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:25.211348    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:25.211859    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:25.211859    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:25.211859    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:25.211859    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:25 GMT
	I0314 19:41:25.211859    8428 round_trippers.go:580]     Audit-Id: 51ddccec-cefc-48db-885c-0bee4de68761
	I0314 19:41:25.211859    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:25.211859    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:25.212094    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:25.710310    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:25.710310    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:25.710310    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:25.710310    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:25.714070    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:25.714070    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:25.715021    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:25.715261    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:25.715317    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:25 GMT
	I0314 19:41:25.715317    8428 round_trippers.go:580]     Audit-Id: 9bba2453-fe1b-4c00-aa33-27178358573e
	I0314 19:41:25.715317    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:25.715317    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:25.715317    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:26.196781    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:26.196781    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:26.196781    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:26.196781    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:26.200374    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:26.200374    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:26.200374    8428 round_trippers.go:580]     Audit-Id: 38a0865d-e9fd-4357-ae22-310a2ca8054e
	I0314 19:41:26.200374    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:26.200374    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:26.200374    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:26.200374    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:26.200374    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:26 GMT
	I0314 19:41:26.201377    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:26.711357    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:26.711437    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:26.711437    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:26.711437    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:26.715606    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:26.715606    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:26.715606    8428 round_trippers.go:580]     Audit-Id: 83f705e2-d6ce-4277-a141-bbf7fb20cb36
	I0314 19:41:26.715606    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:26.715606    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:26.715606    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:26.715606    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:26.715606    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:26 GMT
	I0314 19:41:26.715606    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:26.716145    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:27.197788    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:27.197788    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:27.197788    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:27.197788    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:27.202284    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:27.202284    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:27.202284    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:27.202284    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:27.202284    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:27.202284    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:27.202284    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:27 GMT
	I0314 19:41:27.202284    8428 round_trippers.go:580]     Audit-Id: 89780342-689a-4b4b-9a53-507e4becaf42
	I0314 19:41:27.202284    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:27.700410    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:27.700410    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:27.700410    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:27.700410    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:27.703997    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:27.704520    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:27.704520    8428 round_trippers.go:580]     Audit-Id: 9d64fa9e-f1ad-4b25-a409-a011218db958
	I0314 19:41:27.704520    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:27.704520    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:27.704520    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:27.704520    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:27.704520    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:27 GMT
	I0314 19:41:27.705028    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:28.197800    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:28.197800    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:28.197800    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:28.197884    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:28.202176    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:28.202176    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:28.202176    8428 round_trippers.go:580]     Audit-Id: 1b5f2e31-1bec-41fd-ace2-56b5a084375e
	I0314 19:41:28.202176    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:28.202176    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:28.202176    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:28.202176    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:28.202176    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:28 GMT
	I0314 19:41:28.202176    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:28.696108    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:28.696184    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:28.696184    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:28.696240    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:28.702088    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:28.702088    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:28.702088    8428 round_trippers.go:580]     Audit-Id: 0b41cd5b-99cc-4e50-bc6a-4bf59ddc2c08
	I0314 19:41:28.702088    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:28.702088    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:28.702088    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:28.702088    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:28.702088    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:28 GMT
	I0314 19:41:28.702688    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:29.202761    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:29.202761    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:29.202761    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:29.202838    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:29.207149    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:29.207194    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:29.207194    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:29.207194    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:29.207194    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:29.207194    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:29 GMT
	I0314 19:41:29.207194    8428 round_trippers.go:580]     Audit-Id: fc335680-e751-44a1-b304-a9d9ba9270e4
	I0314 19:41:29.207254    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:29.207577    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:29.208204    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:29.706721    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:29.706721    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:29.706721    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:29.706721    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:29.710405    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:29.710405    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:29.710405    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:29 GMT
	I0314 19:41:29.710405    8428 round_trippers.go:580]     Audit-Id: c1c053bb-259b-400f-b791-63de33b648b9
	I0314 19:41:29.710405    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:29.710405    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:29.710405    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:29.710405    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:29.711309    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:30.205481    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:30.205591    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:30.205591    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:30.205591    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:30.209317    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:30.209317    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:30.209317    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:30.209502    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:30.209502    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:30 GMT
	I0314 19:41:30.209502    8428 round_trippers.go:580]     Audit-Id: 4525adea-2f03-4922-ba43-c78e4863bb3b
	I0314 19:41:30.209502    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:30.209502    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:30.209694    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:30.710690    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:30.710690    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:30.710690    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:30.710690    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:30.715042    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:30.715289    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:30.715289    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:30.715289    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:30.715289    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:30.715289    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:30.715289    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:30 GMT
	I0314 19:41:30.715289    8428 round_trippers.go:580]     Audit-Id: 8b5ec208-d0d4-4558-81ba-0373ff9b6752
	I0314 19:41:30.715496    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:31.209024    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:31.209059    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:31.209113    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:31.209145    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:31.212799    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:31.213090    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:31.213090    8428 round_trippers.go:580]     Audit-Id: 3c362792-722e-431f-b1ff-782f8acf2474
	I0314 19:41:31.213146    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:31.213146    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:31.213146    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:31.213146    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:31.213146    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:31 GMT
	I0314 19:41:31.213393    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:31.213860    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:31.710401    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:31.710452    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:31.710522    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:31.710522    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:31.713831    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:31.713831    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:31.713831    8428 round_trippers.go:580]     Audit-Id: 212a743c-6403-4f5b-88ce-91fc302ad0ae
	I0314 19:41:31.713831    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:31.713831    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:31.713831    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:31.713831    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:31.713831    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:31 GMT
	I0314 19:41:31.713831    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:32.207804    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:32.207804    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:32.207804    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:32.207804    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:32.211862    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:32.212050    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:32.212050    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:32.212050    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:32.212050    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:32 GMT
	I0314 19:41:32.212050    8428 round_trippers.go:580]     Audit-Id: b6f497b2-4c77-4c58-b8d8-746b5b80300a
	I0314 19:41:32.212050    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:32.212050    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:32.212358    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:32.707961    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:32.707961    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:32.707961    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:32.707961    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:32.712077    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:32.712077    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:32.712077    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:32.712142    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:32.712142    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:32 GMT
	I0314 19:41:32.712142    8428 round_trippers.go:580]     Audit-Id: 53ee98bb-9195-4391-92a1-ba1146bb275f
	I0314 19:41:32.712142    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:32.712142    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:32.712142    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:33.207974    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:33.207974    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:33.207974    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:33.207974    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:33.211561    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:33.212170    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:33.212227    8428 round_trippers.go:580]     Audit-Id: 3089fa3e-969f-4d05-aae5-937d050da974
	I0314 19:41:33.212227    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:33.212227    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:33.212227    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:33.212227    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:33.212227    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:33 GMT
	I0314 19:41:33.212227    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:33.710070    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:33.710294    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:33.710294    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:33.710294    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:33.716000    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:33.716000    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:33.716000    8428 round_trippers.go:580]     Audit-Id: 23fe6eb1-f627-4944-b86e-691cc4dc3568
	I0314 19:41:33.716000    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:33.716000    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:33.716000    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:33.716000    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:33.716000    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:33 GMT
	I0314 19:41:33.716601    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:33.716632    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:34.210253    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:34.210348    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:34.210348    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:34.210348    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:34.215843    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:34.216538    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:34.216538    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:34.216538    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:34.216538    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:34 GMT
	I0314 19:41:34.216538    8428 round_trippers.go:580]     Audit-Id: a10e4223-f4bb-4f14-b72c-4bd44fd67b81
	I0314 19:41:34.216538    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:34.216538    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:34.216990    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:34.709540    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:34.709540    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:34.709540    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:34.709540    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:34.713643    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:34.713643    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:34.713643    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:34.713643    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:34.713643    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:34.713643    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:34.713643    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:34 GMT
	I0314 19:41:34.713643    8428 round_trippers.go:580]     Audit-Id: 9dcfb0cf-c239-4d37-a4bc-190169535f2b
	I0314 19:41:34.713643    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:35.208688    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:35.208688    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:35.208688    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:35.208688    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:35.213436    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:35.213436    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:35.213436    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:35.213436    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:35.213436    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:35.213530    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:35 GMT
	I0314 19:41:35.213530    8428 round_trippers.go:580]     Audit-Id: 80e2a421-0cbe-40e9-b801-bd9f038f6e2d
	I0314 19:41:35.213530    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:35.213595    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:35.708424    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:35.708512    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:35.708512    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:35.708605    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:35.712889    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:35.712956    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:35.712956    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:35.712956    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:35 GMT
	I0314 19:41:35.712956    8428 round_trippers.go:580]     Audit-Id: 00e913b2-1b38-424c-8611-652a8f9f4f52
	I0314 19:41:35.712956    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:35.713021    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:35.713021    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:35.713328    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:36.209782    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:36.209782    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:36.209880    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:36.209880    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:36.213180    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:36.213180    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:36.213180    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:36 GMT
	I0314 19:41:36.213180    8428 round_trippers.go:580]     Audit-Id: 22025d39-8ff9-4b37-90bd-50da8c10b3d3
	I0314 19:41:36.213180    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:36.213180    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:36.213180    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:36.213180    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:36.213992    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:36.214449    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:36.710015    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:36.710015    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:36.710015    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:36.710015    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:36.715827    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:36.715968    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:36.715968    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:36.715968    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:36.715968    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:36.715968    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:36 GMT
	I0314 19:41:36.715968    8428 round_trippers.go:580]     Audit-Id: 3f0c16bb-a945-4a3e-bd1c-d07c66b61ef9
	I0314 19:41:36.715968    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:36.715968    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:37.197552    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:37.197552    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:37.197552    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:37.197552    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:37.201960    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:37.201960    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:37.201960    8428 round_trippers.go:580]     Audit-Id: 183eb24f-291b-4cfa-bb64-b49c3c05e888
	I0314 19:41:37.201960    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:37.201960    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:37.201960    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:37.201960    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:37.201960    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:37 GMT
	I0314 19:41:37.201960    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:37.698833    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:37.698901    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:37.698901    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:37.698968    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:37.705769    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:41:37.705769    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:37.705769    8428 round_trippers.go:580]     Audit-Id: aa5cbed7-5978-45ea-b68b-9fa5eaac33e9
	I0314 19:41:37.705769    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:37.705769    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:37.705769    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:37.705769    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:37.705769    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:37 GMT
	I0314 19:41:37.705769    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:38.211491    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:38.211491    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:38.211491    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:38.211491    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:38.216247    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:38.216247    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:38.216247    8428 round_trippers.go:580]     Audit-Id: 5983a43c-4199-482c-81a4-89b23ace5760
	I0314 19:41:38.216247    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:38.216247    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:38.216247    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:38.216247    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:38.216247    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:38 GMT
	I0314 19:41:38.216799    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:38.217298    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:38.709246    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:38.709246    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:38.709246    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:38.709246    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:38.712497    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:38.713368    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:38.713368    8428 round_trippers.go:580]     Audit-Id: 74eb41ff-7336-4f60-a913-313c50fc0b27
	I0314 19:41:38.713368    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:38.713368    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:38.713469    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:38.713469    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:38.713469    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:38 GMT
	I0314 19:41:38.713787    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:39.206718    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:39.206718    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:39.206718    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:39.206718    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:39.211305    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:39.211305    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:39.211305    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:39.211305    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:39.212017    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:39 GMT
	I0314 19:41:39.212017    8428 round_trippers.go:580]     Audit-Id: d0bd5345-1f27-44f5-bd58-b5aac7cd8f01
	I0314 19:41:39.212017    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:39.212017    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:39.212378    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:39.697087    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:39.697499    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:39.697499    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:39.697594    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:39.703696    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:39.703777    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:39.703777    8428 round_trippers.go:580]     Audit-Id: c6041361-a19e-4eb4-82ab-118084172ce8
	I0314 19:41:39.703777    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:39.703868    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:39.703868    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:39.703868    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:39.703868    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:39 GMT
	I0314 19:41:39.704194    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:40.208478    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:40.208478    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:40.208695    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:40.208695    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:40.213404    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:40.213404    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:40.213404    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:40.213404    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:40.213404    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:40 GMT
	I0314 19:41:40.213404    8428 round_trippers.go:580]     Audit-Id: 1d8a0d30-a790-4fb3-8344-52a305e27afa
	I0314 19:41:40.213404    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:40.213404    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:40.213793    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:40.709352    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:40.709429    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:40.709429    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:40.709429    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:40.716639    8428 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:41:40.716639    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:40.716639    8428 round_trippers.go:580]     Audit-Id: 6d2df5bf-207e-4bd9-a80b-67330db0e987
	I0314 19:41:40.716639    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:40.716639    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:40.716639    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:40.716639    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:40.716639    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:40 GMT
	I0314 19:41:40.716639    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:40.717652    8428 node_ready.go:53] node "multinode-442000" has status "Ready":"False"
	I0314 19:41:41.211418    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:41.211627    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:41.211627    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:41.211627    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:41.215210    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:41.215700    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:41.215700    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:41.215700    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:41 GMT
	I0314 19:41:41.215700    8428 round_trippers.go:580]     Audit-Id: b1fe46bb-dcce-4475-99c8-116dc549e69e
	I0314 19:41:41.215700    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:41.215700    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:41.215700    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:41.215700    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1830","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0314 19:41:41.709834    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:41.709834    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:41.709834    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:41.709834    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:41.713901    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:41.713901    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:41.713901    8428 round_trippers.go:580]     Audit-Id: 4d2f24b7-e161-47d0-8da5-96c9e378d420
	I0314 19:41:41.713901    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:41.713901    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:41.713901    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:41.713901    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:41.713901    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:41 GMT
	I0314 19:41:41.714047    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1867","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0314 19:41:41.714511    8428 node_ready.go:49] node "multinode-442000" has status "Ready":"True"
	I0314 19:41:41.714610    8428 node_ready.go:38] duration metric: took 30.5181132s for node "multinode-442000" to be "Ready" ...
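[editor's note] The iterations above are a plain readiness poll: the client GETs /api/v1/nodes/multinode-442000 roughly every half second until the node's Ready condition flips to True (here after 30.5s, when the object's resourceVersion advances from 1830 to 1867). A minimal client-go sketch of the same check follows; it is an assumption for illustration, not minikube's actual node_ready implementation — the function names, the kubeconfig path, and the fixed 500ms/6m timings are all hypothetical.

	// Sketch (assumption): poll a node until its NodeReady condition is True,
	// mirroring the GET-every-~500ms loop visible in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the node's NodeReady condition is True.
	func nodeIsReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Illustrative kubeconfig path; any reachable cluster config works.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // the log waits up to 6m0s
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-442000", metav1.GetOptions{})
			if err == nil && nodeIsReady(node) {
				fmt.Println("node Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~half-second cadence above
		}
		fmt.Println("timed out waiting for node Ready")
	}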
	I0314 19:41:41.714610    8428 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:41:41.714762    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods
	I0314 19:41:41.714762    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:41.714762    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:41.714762    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:41.720492    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:41.720492    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:41.720492    8428 round_trippers.go:580]     Audit-Id: 23467580-7140-4737-ae92-fc35303fd912
	I0314 19:41:41.720492    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:41.720492    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:41.720492    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:41.720492    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:41.720492    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:41 GMT
	I0314 19:41:41.722598    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1867"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83020 chars]
	I0314 19:41:41.726160    8428 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:41:41.726686    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:41.726749    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:41.726749    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:41.726749    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:41.729506    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:41.730210    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:41.730210    8428 round_trippers.go:580]     Audit-Id: aac4ba97-4438-4e5c-b248-0e23c1db98a1
	I0314 19:41:41.730210    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:41.730210    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:41.730210    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:41.730210    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:41.730210    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:41 GMT
	I0314 19:41:41.730821    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:41.731431    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:41.731431    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:41.731431    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:41.731504    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:41.734317    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:41.734317    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:41.734317    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:41.734317    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:41.734317    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:41.734317    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:41 GMT
	I0314 19:41:41.734317    8428 round_trippers.go:580]     Audit-Id: a86aada6-4dcd-4600-8413-567c0ef68fe1
	I0314 19:41:41.734317    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:41.734824    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1867","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0314 19:41:42.239804    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:42.239804    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:42.239875    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:42.239875    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:42.243092    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:42.243868    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:42.243868    8428 round_trippers.go:580]     Audit-Id: c27723f4-9f49-4996-a08e-d841a22a19a8
	I0314 19:41:42.243868    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:42.243868    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:42.243868    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:42.243868    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:42.243868    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:42 GMT
	I0314 19:41:42.244108    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:42.245153    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:42.245153    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:42.245244    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:42.245244    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:42.248392    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:42.248447    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:42.248447    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:42.248447    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:42 GMT
	I0314 19:41:42.248447    8428 round_trippers.go:580]     Audit-Id: f2ec66a6-a178-4878-ad81-569404ee6f75
	I0314 19:41:42.248447    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:42.248447    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:42.248447    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:42.248762    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1867","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0314 19:41:42.739855    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:42.739883    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:42.739923    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:42.739923    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:42.743580    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:42.743580    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:42.743580    8428 round_trippers.go:580]     Audit-Id: d80107bf-25a5-4cb9-99a5-84b14b857b50
	I0314 19:41:42.743580    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:42.743580    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:42.743580    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:42.743580    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:42.743580    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:43 GMT
	I0314 19:41:42.744187    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:42.744838    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:42.744900    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:42.744900    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:42.744948    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:42.749179    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:42.749179    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:42.749179    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:43 GMT
	I0314 19:41:42.749179    8428 round_trippers.go:580]     Audit-Id: 7f61f411-82b1-4b3d-a89e-1667431fc0b0
	I0314 19:41:42.749179    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:42.749179    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:42.749179    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:42.749179    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:42.749179    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1867","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0314 19:41:43.241429    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:43.241429    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:43.241429    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:43.241429    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:43.245469    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:43.245469    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:43.245469    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:43.245469    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:43.245469    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:43 GMT
	I0314 19:41:43.245469    8428 round_trippers.go:580]     Audit-Id: 947f9ba5-6674-4db6-b848-ccfcab0b246c
	I0314 19:41:43.245469    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:43.245469    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:43.245756    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:43.246225    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:43.246225    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:43.246225    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:43.246225    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:43.253802    8428 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 19:41:43.254028    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:43.254028    8428 round_trippers.go:580]     Audit-Id: cbe17f86-20eb-4b61-96fe-0a6469ce5633
	I0314 19:41:43.254028    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:43.254028    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:43.254028    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:43.254101    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:43.254101    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:43 GMT
	I0314 19:41:43.254248    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1867","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0314 19:41:43.727571    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:43.727571    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:43.727571    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:43.727571    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:43.731149    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:43.731149    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:43.731149    8428 round_trippers.go:580]     Audit-Id: 0be58a2d-4b28-46b4-a274-6ea303323589
	I0314 19:41:43.731893    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:43.731893    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:43.731893    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:43.731893    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:43.731893    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:43 GMT
	I0314 19:41:43.731990    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:43.732631    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:43.732631    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:43.732631    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:43.732631    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:43.735636    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:43.735959    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:43.735959    8428 round_trippers.go:580]     Audit-Id: b23b7a93-0d29-4cc3-b018-7ed68b875015
	I0314 19:41:43.736062    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:43.736062    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:43.736062    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:43.736105    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:43.736129    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:43 GMT
	I0314 19:41:43.736408    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1867","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0314 19:41:43.737128    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
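[editor's note] Each pod iteration re-fetches coredns-5dd5756b68-d22jc and its node; the pod's resourceVersion stays at 1714 throughout, so the verdict line above keeps reporting Ready as False. A hedged sketch of the per-pod test such a loop applies — podIsReady is an illustrative name, not minikube's pod_ready helper:

	// Sketch (assumption): the Ready-condition check behind the verdict
	// lines logged by pod_ready.go each polling iteration.
	package main

	import corev1 "k8s.io/api/core/v1"

	// podIsReady reports whether the pod's PodReady condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}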
	I0314 19:41:44.241265    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:44.241407    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:44.241407    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:44.241407    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:44.245567    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:44.245644    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:44.245644    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:44.245644    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:44 GMT
	I0314 19:41:44.245644    8428 round_trippers.go:580]     Audit-Id: 584dcd68-36d2-4116-baca-d1a18fd29ceb
	I0314 19:41:44.245644    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:44.245717    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:44.245717    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:44.245756    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:44.246727    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:44.246727    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:44.246798    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:44.246798    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:44.249946    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:44.249982    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:44.249982    8428 round_trippers.go:580]     Audit-Id: e4e08b0e-dfde-4f43-b12d-9a0af751dcf0
	I0314 19:41:44.249982    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:44.250019    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:44.250019    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:44.250019    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:44.250019    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:44 GMT
	I0314 19:41:44.250193    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:44.727030    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:44.727120    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:44.727120    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:44.727120    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:44.730910    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:44.730910    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:44.730910    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:44 GMT
	I0314 19:41:44.730910    8428 round_trippers.go:580]     Audit-Id: 3110bc10-3305-47a3-a749-914a214634a6
	I0314 19:41:44.731283    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:44.731283    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:44.731283    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:44.731373    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:44.731643    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:44.732587    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:44.732587    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:44.732587    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:44.732587    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:44.736273    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:44.736273    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:44.736273    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:44.736273    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:44.736273    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:44 GMT
	I0314 19:41:44.736273    8428 round_trippers.go:580]     Audit-Id: f62f6f7f-95a6-4353-beb9-e80878ffde8a
	I0314 19:41:44.736273    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:44.736273    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:44.736273    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:45.227108    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:45.227108    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:45.227108    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:45.227108    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:45.231696    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:45.231990    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:45.231990    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:45 GMT
	I0314 19:41:45.231990    8428 round_trippers.go:580]     Audit-Id: 97b8382c-47ac-416e-8716-2df36bd5a581
	I0314 19:41:45.231990    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:45.231990    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:45.231990    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:45.231990    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:45.231990    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:45.232844    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:45.232844    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:45.232844    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:45.232844    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:45.236196    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:45.236196    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:45.236196    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:45.236196    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:45 GMT
	I0314 19:41:45.236196    8428 round_trippers.go:580]     Audit-Id: cf556e50-baf0-4c08-9d7f-7c4a66616030
	I0314 19:41:45.236196    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:45.236196    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:45.236196    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:45.236196    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:45.740583    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:45.740583    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:45.740583    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:45.740583    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:45.746440    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:45.746440    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:45.746506    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:45.746506    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:46 GMT
	I0314 19:41:45.746506    8428 round_trippers.go:580]     Audit-Id: d9d5ca48-f1fb-4c5e-8f6f-5bdae1731f49
	I0314 19:41:45.746506    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:45.746506    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:45.746506    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:45.746506    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:45.747838    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:45.747905    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:45.747905    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:45.747905    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:45.751281    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:45.751281    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:45.751281    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:45.751281    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:46 GMT
	I0314 19:41:45.751281    8428 round_trippers.go:580]     Audit-Id: 0c4766c7-1604-40b9-bb87-8f37a401dd23
	I0314 19:41:45.751281    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:45.751281    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:45.751281    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:45.751551    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:45.752086    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:41:46.241471    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:46.241471    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:46.241471    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:46.241572    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:46.247054    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:46.247054    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:46.247054    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:46.247054    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:46 GMT
	I0314 19:41:46.247054    8428 round_trippers.go:580]     Audit-Id: c1dad6d1-7449-479c-abeb-2282320122ee
	I0314 19:41:46.247054    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:46.247054    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:46.247054    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:46.247730    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:46.248313    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:46.248399    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:46.248431    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:46.248431    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:46.251703    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:46.251909    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:46.251909    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:46.251909    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:46.251909    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:46.251955    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:46 GMT
	I0314 19:41:46.251955    8428 round_trippers.go:580]     Audit-Id: fa1c67ec-7b0e-4db5-aac9-165132cc7099
	I0314 19:41:46.251955    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:46.252053    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:46.741033    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:46.741033    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:46.741033    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:46.741033    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:46.745201    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:46.745201    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:46.745201    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:46.745201    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:46.745201    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:47 GMT
	I0314 19:41:46.745201    8428 round_trippers.go:580]     Audit-Id: cc3b7c35-34e2-42d2-9419-6fea1ee700eb
	I0314 19:41:46.745201    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:46.745201    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:46.745201    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:46.746627    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:46.746684    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:46.746684    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:46.746740    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:46.749448    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:46.749448    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:46.749448    8428 round_trippers.go:580]     Audit-Id: ae614bd3-d3a7-4628-83a8-3682e4cbbd6c
	I0314 19:41:46.749448    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:46.749448    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:46.749448    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:46.749448    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:46.749448    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:47 GMT
	I0314 19:41:46.750654    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:47.238686    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:47.238686    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:47.238686    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:47.238686    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:47.242009    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:47.242009    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:47.242009    8428 round_trippers.go:580]     Audit-Id: d2208470-c7bc-43de-a253-0f0ffbfcfd90
	I0314 19:41:47.242009    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:47.242009    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:47.242009    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:47.242009    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:47.242009    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:47 GMT
	I0314 19:41:47.242935    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:47.243985    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:47.243985    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:47.244061    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:47.244061    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:47.247654    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:47.247654    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:47.247654    8428 round_trippers.go:580]     Audit-Id: 24bb94a0-efaa-42e8-9b9f-c9241da45bc6
	I0314 19:41:47.247654    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:47.247654    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:47.247654    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:47.247654    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:47.247654    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:47 GMT
	I0314 19:41:47.247654    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:47.736515    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:47.736515    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:47.736515    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:47.736515    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:47.740100    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:47.741005    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:47.741005    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:47.741005    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:47.741005    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:47.741005    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:48 GMT
	I0314 19:41:47.741005    8428 round_trippers.go:580]     Audit-Id: 5261e89f-cc9c-4705-9bd4-f65118544af1
	I0314 19:41:47.741005    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:47.741232    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:47.741918    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:47.741918    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:47.741918    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:47.741976    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:47.745656    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:47.745656    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:47.745656    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:47.745656    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:47.745656    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:47.745656    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:47.745656    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:48 GMT
	I0314 19:41:47.745656    8428 round_trippers.go:580]     Audit-Id: a2a4b706-ae08-4f3e-9fe2-200535ce7ba3
	I0314 19:41:47.745656    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:48.235940    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:48.235940    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:48.235940    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:48.235940    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:48.239513    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:48.240145    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:48.240145    8428 round_trippers.go:580]     Audit-Id: 3217eaed-1637-43ad-bcc7-b0fc46882d37
	I0314 19:41:48.240145    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:48.240145    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:48.240145    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:48.240207    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:48.240207    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:48 GMT
	I0314 19:41:48.240546    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:48.241232    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:48.241232    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:48.241232    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:48.241232    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:48.247394    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:41:48.247394    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:48.247394    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:48.247394    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:48.247394    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:48 GMT
	I0314 19:41:48.247394    8428 round_trippers.go:580]     Audit-Id: 843378bf-de35-4827-a8cb-3f161b3eda27
	I0314 19:41:48.247394    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:48.247394    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:48.247925    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:48.248061    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:41:48.737654    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:48.737654    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:48.737654    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:48.737654    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:48.743116    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:48.743649    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:48.743649    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:48.743649    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:48.743649    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:49 GMT
	I0314 19:41:48.743649    8428 round_trippers.go:580]     Audit-Id: 7e5c8c94-1664-44e8-b72d-0c8d80f907a7
	I0314 19:41:48.743649    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:48.743649    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:48.743954    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:48.744713    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:48.744713    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:48.744713    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:48.744713    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:48.748424    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:48.748424    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:48.748424    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:48.748424    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:48.748424    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:48.748424    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:49 GMT
	I0314 19:41:48.748424    8428 round_trippers.go:580]     Audit-Id: c9ff291c-66fd-4679-b6ef-7a6cd2d63484
	I0314 19:41:48.748424    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:48.748424    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:49.237453    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:49.237520    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:49.237575    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:49.237575    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:49.243325    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:49.243325    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:49.243325    8428 round_trippers.go:580]     Audit-Id: 385afd4b-5ebe-4a86-875f-c8bd54987cc8
	I0314 19:41:49.243325    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:49.243325    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:49.243325    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:49.243325    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:49.243325    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:49 GMT
	I0314 19:41:49.243325    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:49.244173    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:49.244281    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:49.244281    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:49.244281    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:49.247889    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:49.247889    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:49.247889    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:49.247889    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:49.247889    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:49.247889    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:49.247889    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:49 GMT
	I0314 19:41:49.247889    8428 round_trippers.go:580]     Audit-Id: 35d46784-af67-4676-83ef-fc873ff549ba
	I0314 19:41:49.247889    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:49.734075    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:49.734075    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:49.734075    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:49.734075    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:49.737547    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:49.737547    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:49.737547    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:50 GMT
	I0314 19:41:49.737547    8428 round_trippers.go:580]     Audit-Id: 02d243c5-fb2d-402f-945b-ba9bac53d9d0
	I0314 19:41:49.737547    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:49.737547    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:49.737547    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:49.737547    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:49.738320    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:49.738947    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:49.739027    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:49.739027    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:49.739027    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:49.742171    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:49.742171    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:49.742547    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:49.742547    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:49.742547    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:50 GMT
	I0314 19:41:49.742547    8428 round_trippers.go:580]     Audit-Id: a78c0188-d2db-461b-9f7f-9356cdcbe18e
	I0314 19:41:49.742547    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:49.742547    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:49.743013    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:50.232310    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:50.232310    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:50.232310    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:50.232310    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:50.235903    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:50.235903    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:50.235903    8428 round_trippers.go:580]     Audit-Id: d6100e54-68ff-438a-8532-332fd7561488
	I0314 19:41:50.235903    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:50.236732    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:50.236732    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:50.236732    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:50.236732    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:50 GMT
	I0314 19:41:50.236812    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:50.237941    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:50.237941    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:50.238013    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:50.238013    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:50.243647    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:41:50.243647    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:50.243647    8428 round_trippers.go:580]     Audit-Id: 5f589a5a-5cb6-4cb7-b5af-682fc6eb04ea
	I0314 19:41:50.243647    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:50.243647    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:50.243647    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:50.243647    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:50.243647    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:50 GMT
	I0314 19:41:50.243647    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:50.736406    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:50.736459    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:50.736512    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:50.736512    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:50.739743    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:50.739743    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:50.739743    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:51 GMT
	I0314 19:41:50.739743    8428 round_trippers.go:580]     Audit-Id: 6c2b19ec-101b-4222-99e5-6693e17bca16
	I0314 19:41:50.739743    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:50.739743    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:50.739743    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:50.739743    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:50.740345    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:50.742395    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:50.742395    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:50.742395    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:50.742395    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:50.747035    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:50.748055    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:50.748055    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:50.748055    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:51 GMT
	I0314 19:41:50.748100    8428 round_trippers.go:580]     Audit-Id: de2a2856-5d20-4199-a3b7-b99d26947ef8
	I0314 19:41:50.748100    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:50.748100    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:50.748100    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:50.748434    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:50.749228    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:41:51.241145    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:51.241145    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:51.241145    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:51.241145    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:51.244992    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:51.245474    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:51.245474    8428 round_trippers.go:580]     Audit-Id: 5b556003-1bca-4780-8667-e672da262494
	I0314 19:41:51.245474    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:51.245474    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:51.245474    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:51.245474    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:51.245474    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:51 GMT
	I0314 19:41:51.245474    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:51.246175    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:51.246175    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:51.246175    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:51.246175    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:51.249390    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:51.249390    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:51.249390    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:51.249390    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:51.249390    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:51.249390    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:51 GMT
	I0314 19:41:51.249390    8428 round_trippers.go:580]     Audit-Id: e4d510dd-3726-4496-99c4-51f527015b16
	I0314 19:41:51.249390    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:51.249390    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:51.740068    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:51.740068    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:51.740068    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:51.740068    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:51.744445    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:51.744445    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:51.744445    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:51.744445    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:52 GMT
	I0314 19:41:51.744445    8428 round_trippers.go:580]     Audit-Id: 6e203b11-4300-4326-ad82-3dda87102f01
	I0314 19:41:51.744445    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:51.744445    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:51.744445    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:51.744445    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:51.745189    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:51.745282    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:51.745282    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:51.745282    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:51.748583    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:51.748583    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:51.748583    8428 round_trippers.go:580]     Audit-Id: 42ddeaaf-b878-4272-bbc4-5bf85e5bd669
	I0314 19:41:51.748583    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:51.748583    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:51.748583    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:51.748583    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:51.748583    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:52 GMT
	I0314 19:41:51.748583    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:52.240983    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:52.240983    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:52.240983    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:52.240983    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:52.245656    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:52.245656    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:52.245656    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:52.245656    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:52 GMT
	I0314 19:41:52.245656    8428 round_trippers.go:580]     Audit-Id: f79c0237-2bb3-4edf-ad81-093b0acadba7
	I0314 19:41:52.245656    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:52.245656    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:52.245656    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:52.245882    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:52.246348    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:52.246348    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:52.246348    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:52.246348    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:52.250199    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:52.250199    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:52.250199    8428 round_trippers.go:580]     Audit-Id: 77784c83-5ef4-4188-900a-0c33cfbe7fdb
	I0314 19:41:52.250199    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:52.250199    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:52.250199    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:52.250199    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:52.250199    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:52 GMT
	I0314 19:41:52.250461    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:52.740949    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:52.740949    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:52.740949    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:52.740949    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:52.744521    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:52.745302    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:52.745446    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:52.745495    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:52.745495    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:52.745538    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:52.745538    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:53 GMT
	I0314 19:41:52.745538    8428 round_trippers.go:580]     Audit-Id: ae3ed482-02ea-468b-9fbc-f88ee73df7a3
	I0314 19:41:52.745538    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:52.746139    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:52.746139    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:52.746139    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:52.746139    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:52.749789    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:52.749789    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:52.749789    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:52.749789    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:53 GMT
	I0314 19:41:52.749789    8428 round_trippers.go:580]     Audit-Id: 1cbd399f-4bdd-4263-bde3-1e8c70e0f4ee
	I0314 19:41:52.749789    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:52.749789    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:52.749789    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:52.749789    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:52.749789    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:41:53.242741    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:53.242816    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:53.242816    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:53.242816    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:53.251110    8428 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 19:41:53.251110    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:53.251110    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:53.251110    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:53.251110    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:53.251110    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:53.251110    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:53 GMT
	I0314 19:41:53.251110    8428 round_trippers.go:580]     Audit-Id: b58e29e5-31b8-4827-af82-5fce39f6a3a6
	I0314 19:41:53.251320    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:53.252077    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:53.252130    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:53.252130    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:53.252130    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:53.261917    8428 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0314 19:41:53.261917    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:53.262672    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:53.262672    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:53.262672    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:53 GMT
	I0314 19:41:53.262672    8428 round_trippers.go:580]     Audit-Id: 27c11afa-2538-45c2-ac85-8c6da5e883e5
	I0314 19:41:53.262672    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:53.262672    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:53.262870    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:53.731603    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:53.731697    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:53.731697    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:53.731697    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:53.735003    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:53.735003    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:53.735003    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:53 GMT
	I0314 19:41:53.735003    8428 round_trippers.go:580]     Audit-Id: 4ba9afd3-943a-4005-b634-47a8d090d386
	I0314 19:41:53.735003    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:53.735003    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:53.735003    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:53.735003    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:53.736241    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:53.736661    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:53.736661    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:53.736661    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:53.736661    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:53.741612    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:53.741612    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:53.741612    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:54 GMT
	I0314 19:41:53.741612    8428 round_trippers.go:580]     Audit-Id: ab5aea84-1b1b-4625-829e-1cd5ec19ce09
	I0314 19:41:53.741612    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:53.741612    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:53.741612    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:53.741612    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:53.741612    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:54.232631    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:54.232702    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:54.232702    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:54.232702    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:54.237118    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:54.237118    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:54.237118    8428 round_trippers.go:580]     Audit-Id: d51376bd-bdf4-4df7-ac56-52dbb0a5ed83
	I0314 19:41:54.237118    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:54.237118    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:54.237118    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:54.237118    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:54.237118    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:54 GMT
	I0314 19:41:54.237118    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:54.237946    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:54.237946    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:54.238035    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:54.238035    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:54.241168    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:54.241168    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:54.241168    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:54.241168    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:54.241168    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:54.241168    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:54.241407    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:54 GMT
	I0314 19:41:54.241407    8428 round_trippers.go:580]     Audit-Id: ba7b9edb-ba56-4dd0-82fb-6e338a923bea
	I0314 19:41:54.241688    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:54.735452    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:54.735681    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:54.735681    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:54.735681    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:54.739830    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:54.739830    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:54.739830    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:55 GMT
	I0314 19:41:54.739830    8428 round_trippers.go:580]     Audit-Id: 7769d62e-01cc-4e9b-9ca4-163dff0075f8
	I0314 19:41:54.739910    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:54.739910    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:54.739910    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:54.739910    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:54.740058    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:54.740648    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:54.740648    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:54.740741    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:54.740741    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:54.743856    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:54.744066    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:54.744066    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:54.744066    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:54.744066    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:54.744066    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:54.744066    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:55 GMT
	I0314 19:41:54.744066    8428 round_trippers.go:580]     Audit-Id: b12d7112-5dd2-492b-858b-938837f3ae8f
	I0314 19:41:54.744333    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:55.235138    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:55.235200    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:55.235200    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:55.235200    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:55.239334    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:41:55.239334    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:55.239334    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:55.239334    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:55.239334    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:55 GMT
	I0314 19:41:55.239334    8428 round_trippers.go:580]     Audit-Id: cd5954d9-323a-4a86-a178-85ceb9b09d8e
	I0314 19:41:55.239334    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:55.239334    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:55.239334    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:55.240666    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:55.240666    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:55.240666    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:55.240666    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:55.243468    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:55.244171    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:55.244171    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:55.244171    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:55 GMT
	I0314 19:41:55.244171    8428 round_trippers.go:580]     Audit-Id: d9dac7b5-10eb-4532-81b2-4793c32f00b7
	I0314 19:41:55.244171    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:55.244171    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:55.244171    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:55.244260    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:55.244260    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:41:55.732956    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:55.733040    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:55.733040    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:55.733040    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:55.739580    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:41:55.739580    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:55.739580    8428 round_trippers.go:580]     Audit-Id: aceb49bb-f93d-4ca1-8a97-0e26f4c29e3c
	I0314 19:41:55.739580    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:55.739580    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:55.739580    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:55.739580    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:55.739580    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:56 GMT
	I0314 19:41:55.739580    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:55.740251    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:55.740251    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:55.740251    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:55.740251    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:55.743931    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:55.743931    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:55.743931    8428 round_trippers.go:580]     Audit-Id: e3197417-e96b-461e-9a13-2cb3173e135e
	I0314 19:41:55.743931    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:55.743931    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:55.743931    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:55.743931    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:55.743931    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:56 GMT
	I0314 19:41:55.743931    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:56.233760    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:56.233760    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:56.233760    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:56.233954    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:56.237689    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:56.238016    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:56.238078    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:56.238078    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:56.238078    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:56 GMT
	I0314 19:41:56.238109    8428 round_trippers.go:580]     Audit-Id: 9528c104-50cb-4376-858d-1405c722b092
	I0314 19:41:56.238109    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:56.238109    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:56.238620    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:56.239454    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:56.239489    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:56.239538    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:56.239538    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:56.242341    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:56.243190    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:56.243190    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:56.243190    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:56.243249    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:56 GMT
	I0314 19:41:56.243249    8428 round_trippers.go:580]     Audit-Id: ba502ac7-f672-45dd-a1c9-01359b92f829
	I0314 19:41:56.243249    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:56.243249    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:56.243490    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:56.736161    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:56.736395    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:56.736492    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:56.736492    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:56.739940    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:56.739940    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:56.739940    8428 round_trippers.go:580]     Audit-Id: 7cd1e3df-1d36-4139-a505-6b4ef9fbfc38
	I0314 19:41:56.739940    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:56.739940    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:56.739940    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:56.739940    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:56.739940    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:57 GMT
	I0314 19:41:56.740521    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:56.740832    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:56.740832    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:56.740832    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:56.740832    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:56.744515    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:56.744515    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:56.744515    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:56.744515    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:56.744515    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:57 GMT
	I0314 19:41:56.744515    8428 round_trippers.go:580]     Audit-Id: fd91bcae-a05d-4e01-8a97-e0dcd67588b7
	I0314 19:41:56.744515    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:56.744515    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:56.744515    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:57.236132    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:57.236226    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:57.236226    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:57.236226    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:57.239674    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:57.240134    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:57.240134    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:57.240134    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:57.240134    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:57.240316    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:57 GMT
	I0314 19:41:57.240316    8428 round_trippers.go:580]     Audit-Id: 4b1c797b-3ff3-48ca-aab3-36ac1c0711af
	I0314 19:41:57.240382    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:57.240603    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:57.241386    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:57.241465    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:57.241465    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:57.241465    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:57.243719    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:57.243719    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:57.243719    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:57.243719    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:57.243719    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:57 GMT
	I0314 19:41:57.243719    8428 round_trippers.go:580]     Audit-Id: 26450d5b-26a3-4cd8-8f33-02d5ab1ae860
	I0314 19:41:57.243719    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:57.243719    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:57.244898    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:57.245413    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
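[Editor's note: the repeating request/response block above is minikube's pod-readiness wait loop. Roughly every 500ms it GETs the coredns Pod and the control-plane Node from the API server at https://172.17.93.236:8443, and pod_ready.go:102 reports the Pod's Ready condition, which remains "False" throughout this window. The sketch below is a minimal, hypothetical client-go polling loop of the same shape, offered only to make the log readable; the kubeconfig path, 6-minute timeout, and helper names are illustrative assumptions, not minikube's actual implementation.]

// Hypothetical sketch of a pod-readiness poll like the one logged above:
// fetch the Pod every 500ms and inspect its Ready condition. The kubeconfig
// path, namespace, pod name, and timeout are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the Pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Build a client from a kubeconfig (path is a placeholder).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Give up after a fixed deadline, mirroring a bounded wait.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		// Each iteration issues one GET against the Pod resource,
		// corresponding to one request/response block in the log.
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-d22jc", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-time.After(500 * time.Millisecond):
			// ~500ms cadence, matching the timestamps in the log above.
		}
	}
}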
	I0314 19:41:57.738333    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:57.738333    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:57.738333    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:57.738488    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:57.741530    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:57.742606    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:57.742606    8428 round_trippers.go:580]     Audit-Id: 830f79ab-9c16-4427-8397-41ac517a92a1
	I0314 19:41:57.742606    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:57.742606    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:57.742606    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:57.742606    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:57.742606    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:58 GMT
	I0314 19:41:57.742606    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:57.743222    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:57.743222    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:57.743222    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:57.743222    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:57.746912    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:57.746912    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:57.746912    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:57.746912    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:57.746912    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:57.746912    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:57.746912    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:58 GMT
	I0314 19:41:57.746912    8428 round_trippers.go:580]     Audit-Id: 893542fa-f875-439d-aad8-28747860d32a
	I0314 19:41:57.747905    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:58.236358    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:58.236627    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:58.236627    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:58.236627    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:58.243493    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:41:58.243493    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:58.243493    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:58.243493    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:58.243493    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:58.243493    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:58.243493    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:58 GMT
	I0314 19:41:58.243493    8428 round_trippers.go:580]     Audit-Id: c80943cd-5242-403d-9d55-1e999b5e636a
	I0314 19:41:58.243493    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:58.244863    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:58.244863    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:58.244863    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:58.244922    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:58.247056    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:58.248045    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:58.248045    8428 round_trippers.go:580]     Audit-Id: 8a5d57d2-c0bc-488c-8ec1-338b9bdbc1e2
	I0314 19:41:58.248045    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:58.248045    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:58.248045    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:58.248045    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:58.248045    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:58 GMT
	I0314 19:41:58.248195    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:58.736525    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:58.736525    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:58.736525    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:58.736525    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:58.740082    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:58.740697    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:58.740697    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:58.740697    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:58.740697    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:58.740697    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:58.740697    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:59 GMT
	I0314 19:41:58.740697    8428 round_trippers.go:580]     Audit-Id: 698e99c9-f9cf-435f-a9b2-4d55da6aaf9d
	I0314 19:41:58.740697    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:58.741651    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:58.741651    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:58.741651    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:58.741651    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:58.745370    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:58.745448    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:58.745448    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:58.745448    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:59 GMT
	I0314 19:41:58.745448    8428 round_trippers.go:580]     Audit-Id: 87e15df5-959e-4241-8847-e50d77646b8f
	I0314 19:41:58.745525    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:58.745525    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:58.745525    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:58.745656    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:59.236628    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:59.236704    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:59.236704    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:59.236704    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:59.240032    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:59.240032    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:59.240032    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:59.240032    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:59.240032    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:59 GMT
	I0314 19:41:59.240032    8428 round_trippers.go:580]     Audit-Id: 478e8674-e91f-4d38-a9cb-95e94c626c72
	I0314 19:41:59.240032    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:59.240032    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:59.241104    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:59.241503    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:59.241503    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:59.241503    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:59.241503    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:59.245078    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:59.245078    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:59.245078    8428 round_trippers.go:580]     Audit-Id: 2b7f401f-e703-45c2-9a5d-495575c8c0e5
	I0314 19:41:59.245078    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:59.245078    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:59.245078    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:59.245078    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:59.245078    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:41:59 GMT
	I0314 19:41:59.245354    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:41:59.245722    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:41:59.737360    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:41:59.737360    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:59.737360    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:59.737360    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:59.741585    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:41:59.741585    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:59.741585    8428 round_trippers.go:580]     Audit-Id: 6cf475cf-1b8c-430e-b8e8-b2e6a3c78b4e
	I0314 19:41:59.741585    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:59.741585    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:59.741585    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:59.741585    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:59.741585    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:00 GMT
	I0314 19:41:59.741585    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:41:59.742502    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:41:59.742502    8428 round_trippers.go:469] Request Headers:
	I0314 19:41:59.742561    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:41:59.742561    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:41:59.745473    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:41:59.745473    8428 round_trippers.go:577] Response Headers:
	I0314 19:41:59.745473    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:41:59.745666    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:41:59.745666    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:41:59.745666    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:41:59.745666    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:00 GMT
	I0314 19:41:59.745666    8428 round_trippers.go:580]     Audit-Id: 4dde0bf4-b75d-4fc6-98d5-fcf9394192ff
	I0314 19:41:59.745778    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:00.237659    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:00.237659    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:00.237659    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:00.237659    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:00.242315    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:00.242315    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:00.242315    8428 round_trippers.go:580]     Audit-Id: 83cd1f48-c08c-4020-ba77-7476a6b0355b
	I0314 19:42:00.242315    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:00.242315    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:00.242315    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:00.242315    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:00.242315    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:00 GMT
	I0314 19:42:00.242315    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:00.244009    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:00.244009    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:00.244057    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:00.244057    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:00.247316    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:00.247316    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:00.248137    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:00.248137    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:00.248137    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:00.248137    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:00 GMT
	I0314 19:42:00.248137    8428 round_trippers.go:580]     Audit-Id: 271e1356-4e71-4e5e-b664-021974773825
	I0314 19:42:00.248137    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:00.248383    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:00.738502    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:00.738502    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:00.738502    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:00.738502    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:00.741984    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:00.741984    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:00.741984    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:01 GMT
	I0314 19:42:00.741984    8428 round_trippers.go:580]     Audit-Id: 6312c984-4977-4cd0-ae1d-06915e634932
	I0314 19:42:00.741984    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:00.742066    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:00.742066    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:00.742066    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:00.742262    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:00.742849    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:00.742849    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:00.742849    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:00.742849    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:00.746109    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:00.746402    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:00.746402    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:00.746402    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:00.746402    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:00.746402    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:01 GMT
	I0314 19:42:00.746402    8428 round_trippers.go:580]     Audit-Id: 3b438d53-d996-4b3c-bc42-9f76b1d61219
	I0314 19:42:00.746402    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:00.746607    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:01.239325    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:01.239432    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:01.239432    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:01.239432    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:01.243386    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:01.243386    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:01.243443    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:01.243443    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:01.243443    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:01.243443    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:01.243443    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:01 GMT
	I0314 19:42:01.243443    8428 round_trippers.go:580]     Audit-Id: 9a98388c-e53c-46e4-a571-a6595d25c3fe
	I0314 19:42:01.243618    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:01.244170    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:01.244170    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:01.244253    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:01.244253    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:01.247411    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:01.247411    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:01.247411    8428 round_trippers.go:580]     Audit-Id: 29a5494b-bfbd-4e9e-8267-7bad36e0193e
	I0314 19:42:01.247411    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:01.247411    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:01.247411    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:01.247411    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:01.247411    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:01 GMT
	I0314 19:42:01.247636    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:01.248497    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:42:01.741540    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:01.741540    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:01.741540    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:01.741540    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:01.745137    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:01.745137    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:01.745781    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:01.745781    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:01.745781    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:01.745781    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:02 GMT
	I0314 19:42:01.745781    8428 round_trippers.go:580]     Audit-Id: 7523b4ac-d583-4c2c-a7cf-162269363a9d
	I0314 19:42:01.745781    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:01.745952    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:01.747078    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:01.747123    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:01.747152    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:01.747152    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:01.754049    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:42:01.754049    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:01.754049    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:02 GMT
	I0314 19:42:01.754049    8428 round_trippers.go:580]     Audit-Id: a4b2916e-1a12-4979-89b7-d30146016d26
	I0314 19:42:01.754049    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:01.754049    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:01.754049    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:01.754049    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:01.754728    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:02.240260    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:02.240479    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:02.240479    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:02.240479    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:02.244198    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:02.244885    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:02.244885    8428 round_trippers.go:580]     Audit-Id: ef542ebf-a9cd-434b-99a6-ff7e2ba78cc1
	I0314 19:42:02.244885    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:02.244885    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:02.244885    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:02.244885    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:02.244885    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:02 GMT
	I0314 19:42:02.244984    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:02.245581    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:02.245581    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:02.245581    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:02.245581    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:02.248151    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:02.249023    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:02.249023    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:02 GMT
	I0314 19:42:02.249023    8428 round_trippers.go:580]     Audit-Id: 89621acb-a1a8-4f06-bbcc-116916bbc135
	I0314 19:42:02.249072    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:02.249072    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:02.249072    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:02.249072    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:02.249072    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:02.731334    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:02.731334    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:02.731334    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:02.731334    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:02.735042    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:02.735042    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:02.735042    8428 round_trippers.go:580]     Audit-Id: be59076a-63ef-443c-a5ca-b85b80e401f1
	I0314 19:42:02.735042    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:02.735042    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:02.735042    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:02.735042    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:02.735042    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:02 GMT
	I0314 19:42:02.735735    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:02.736325    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:02.736403    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:02.736403    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:02.736403    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:02.739187    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:02.739187    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:02.739187    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:02.739187    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:02.739187    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:02.739187    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:02.739187    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:03 GMT
	I0314 19:42:02.739187    8428 round_trippers.go:580]     Audit-Id: 580d22b5-82e0-46ca-ac4d-1ce2e25b6f79
	I0314 19:42:02.740300    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:03.231511    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:03.231511    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:03.231511    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:03.231764    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:03.235008    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:03.235008    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:03.235008    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:03.235855    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:03.235855    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:03 GMT
	I0314 19:42:03.235855    8428 round_trippers.go:580]     Audit-Id: e906cfeb-5372-419f-841e-36275aef69b9
	I0314 19:42:03.235855    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:03.235855    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:03.236020    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:03.236668    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:03.236668    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:03.236668    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:03.236668    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:03.239949    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:03.240265    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:03.240265    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:03.240265    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:03 GMT
	I0314 19:42:03.240265    8428 round_trippers.go:580]     Audit-Id: ad260da1-e1a4-425a-8de3-0f4dc9f8611d
	I0314 19:42:03.240265    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:03.240265    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:03.240265    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:03.240335    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:03.729709    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:03.729789    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:03.729789    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:03.729867    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:03.733582    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:03.733823    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:03.733823    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:03.733823    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:03 GMT
	I0314 19:42:03.733823    8428 round_trippers.go:580]     Audit-Id: 9c7ca33a-df0e-48d2-a598-5dcd3e8ebca8
	I0314 19:42:03.733823    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:03.733823    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:03.733823    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:03.733910    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:03.734622    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:03.734622    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:03.734622    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:03.734622    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:03.737356    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:03.737356    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:03.737356    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:03.737356    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:03.737356    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:04 GMT
	I0314 19:42:03.737356    8428 round_trippers.go:580]     Audit-Id: 79313240-6cb5-4091-8cdc-1c165c397efc
	I0314 19:42:03.737356    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:03.737356    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:03.738528    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:03.738905    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
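
The pod_ready.go:102 line above closes one full iteration of the readiness wait that produces this trace: fetch the pod, test its Ready condition, fetch the node it runs on, sleep roughly half a second, repeat until the pod reports Ready or the deadline expires. A minimal client-go sketch of that pattern follows; it is a simplification rather than minikube's actual pod_ready.go, and the kubeconfig path, timeout, and hard-coded pod name are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a kubeconfig in the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Overall deadline for the wait; 6 minutes is an illustrative choice.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-d22jc", metav1.GetOptions{})
		if err != nil {
			panic(err) // includes context deadline exceeded when the wait times out
		}
		if podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", pod.Name, pod.Namespace)
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
}
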
	I0314 19:42:04.228336    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:04.228642    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:04.228642    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:04.228642    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:04.232927    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:04.233000    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:04.233000    8428 round_trippers.go:580]     Audit-Id: eb2d5e11-8e00-4103-abb1-ec6cde0e6c3c
	I0314 19:42:04.233000    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:04.233059    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:04.233059    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:04.233059    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:04.233059    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:04 GMT
	I0314 19:42:04.233235    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:04.233235    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:04.233235    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:04.233235    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:04.233235    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:04.237328    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:04.237587    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:04.237587    8428 round_trippers.go:580]     Audit-Id: 7fd398a0-c6b7-499c-ba31-b6af8485812a
	I0314 19:42:04.237587    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:04.237587    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:04.237587    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:04.237587    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:04.237671    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:04 GMT
	I0314 19:42:04.237717    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:04.742889    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:04.742889    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:04.742889    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:04.742889    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:04.747035    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:04.747035    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:04.747035    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:04.747275    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:05 GMT
	I0314 19:42:04.747275    8428 round_trippers.go:580]     Audit-Id: 26ce9fc8-268f-4ac5-b718-8bffe0eb4bcb
	I0314 19:42:04.747275    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:04.747275    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:04.747275    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:04.747382    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:04.747748    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:04.747748    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:04.747748    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:04.747748    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:04.752637    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:04.752674    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:04.752674    8428 round_trippers.go:580]     Audit-Id: 26aa6448-4af0-4f68-8fd1-335763c40acb
	I0314 19:42:04.752674    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:04.752674    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:04.752674    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:04.752674    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:04.752674    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:05 GMT
	I0314 19:42:04.752885    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:05.242615    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:05.242729    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:05.242729    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:05.242729    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:05.247091    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:05.247202    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:05.247202    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:05 GMT
	I0314 19:42:05.247202    8428 round_trippers.go:580]     Audit-Id: 83152fd1-4358-4253-96c4-ed80fec3a0dd
	I0314 19:42:05.247202    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:05.247202    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:05.247202    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:05.247202    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:05.247431    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:05.248071    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:05.248071    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:05.248071    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:05.248071    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:05.251148    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:05.251486    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:05.251486    8428 round_trippers.go:580]     Audit-Id: 1c6b2950-dc46-42b5-993c-5a7839c5f703
	I0314 19:42:05.251486    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:05.251486    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:05.251486    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:05.251486    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:05.251486    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:05 GMT
	I0314 19:42:05.251706    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:05.742679    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:05.742766    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:05.742766    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:05.742766    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:05.747090    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:05.747090    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:05.747180    8428 round_trippers.go:580]     Audit-Id: 3f8e906f-a2b8-4403-923d-01781968ce22
	I0314 19:42:05.747180    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:05.747180    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:05.747180    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:05.747180    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:05.747180    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:06 GMT
	I0314 19:42:05.747466    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:05.748415    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:05.748523    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:05.748523    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:05.748523    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:05.752290    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:05.752290    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:05.752290    8428 round_trippers.go:580]     Audit-Id: 664e836c-1dcc-4aec-b9f8-bce2c313e960
	I0314 19:42:05.752290    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:05.752290    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:05.752290    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:05.752290    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:05.752290    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:06 GMT
	I0314 19:42:05.753281    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:05.753281    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:42:06.241637    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:06.241714    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:06.241714    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:06.241714    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:06.247011    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:42:06.247011    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:06.247011    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:06.247011    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:06.247011    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:06.247011    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:06.247011    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:06 GMT
	I0314 19:42:06.247011    8428 round_trippers.go:580]     Audit-Id: b87bb486-f954-4d46-9cff-74be2314856f
	I0314 19:42:06.247545    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:06.247773    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:06.247773    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:06.247773    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:06.247773    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:06.250994    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:06.250994    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:06.250994    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:06.250994    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:06.250994    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:06.250994    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:06 GMT
	I0314 19:42:06.250994    8428 round_trippers.go:580]     Audit-Id: 6afeb2d0-e202-46d4-aef7-05dc411c17a6
	I0314 19:42:06.250994    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:06.250994    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:06.728875    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:06.728930    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:06.728930    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:06.728989    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:06.732622    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:06.732622    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:06.732622    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:06 GMT
	I0314 19:42:06.732622    8428 round_trippers.go:580]     Audit-Id: 7396783a-6c98-4017-bc95-9bb85a5d0bb4
	I0314 19:42:06.732622    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:06.732622    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:06.732622    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:06.732622    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:06.733094    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:06.733633    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:06.733746    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:06.733746    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:06.733746    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:06.737912    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:06.737912    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:06.737912    8428 round_trippers.go:580]     Audit-Id: b352f220-862a-4d87-8daa-e7b8deade649
	I0314 19:42:06.738243    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:06.738243    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:06.738243    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:06.738243    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:06.738243    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:07 GMT
	I0314 19:42:06.738428    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:07.229844    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:07.229844    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:07.229934    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:07.229934    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:07.234621    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:07.235018    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:07.235018    8428 round_trippers.go:580]     Audit-Id: d64d8135-68ba-4a81-b428-4698ce7398aa
	I0314 19:42:07.235018    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:07.235018    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:07.235018    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:07.235018    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:07.235018    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:07 GMT
	I0314 19:42:07.236873    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:07.237494    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:07.237569    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:07.237569    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:07.237569    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:07.240784    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:07.240784    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:07.240784    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:07.240784    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:07.240784    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:07 GMT
	I0314 19:42:07.240784    8428 round_trippers.go:580]     Audit-Id: 1d041fa9-1876-44ed-8c7b-a1e3db6260b5
	I0314 19:42:07.240784    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:07.240784    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:07.241055    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:07.729149    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:07.729149    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:07.729149    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:07.729149    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:07.734340    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:42:07.734340    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:07.734340    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:07.734340    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:07.734340    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:07.734340    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:07 GMT
	I0314 19:42:07.734340    8428 round_trippers.go:580]     Audit-Id: 7c666071-9683-4eed-802f-e45f37f4feb1
	I0314 19:42:07.734340    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:07.734340    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:07.735181    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:07.735242    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:07.735242    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:07.735242    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:07.737909    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:07.737909    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:07.737909    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:07.737909    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:07.737909    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:07.737909    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:08 GMT
	I0314 19:42:07.737909    8428 round_trippers.go:580]     Audit-Id: 3e5700c3-9c68-44da-a739-c03ad4a563b0
	I0314 19:42:07.737909    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:07.738651    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:08.229377    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:08.229377    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:08.229377    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:08.229377    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:08.233298    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:08.233842    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:08.233842    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:08 GMT
	I0314 19:42:08.233842    8428 round_trippers.go:580]     Audit-Id: 5c0c8501-fa6d-46ce-8a5e-d87ea412e436
	I0314 19:42:08.233842    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:08.233896    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:08.233896    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:08.233896    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:08.234194    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:08.235135    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:08.235135    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:08.235135    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:08.235225    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:08.237464    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:08.238456    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:08.238456    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:08.238456    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:08 GMT
	I0314 19:42:08.238456    8428 round_trippers.go:580]     Audit-Id: f5a0f9a4-c9f6-4803-945e-d70652c0a646
	I0314 19:42:08.238456    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:08.238456    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:08.238456    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:08.238548    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:08.238548    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
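
Two response headers that recur throughout this trace are useful when digging deeper. Audit-Id is unique per request and matches the auditID field recorded in the apiserver's audit log, so it lets you correlate a client-side line with the server-side event; the X-Kubernetes-Pf-* pair identifies the API Priority and Fairness FlowSchema and PriorityLevelConfiguration that classified the request. A short probe can read them without a full client. This sketch assumes the apiserver address from this log is still reachable and that anonymous auth is enabled (the Kubernetes default, so the request reaches the APF filter and gets a 403 rather than being dropped earlier); skipping TLS verification is acceptable only for this kind of debugging.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Debug-only client: the cluster's apiserver certificate is signed
	// by minikube's own CA, which this probe does not trust.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://172.17.93.236:8443/api/v1/nodes/multinode-442000")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// Unauthenticated, so expect 403 Forbidden; the diagnostic headers
	// are set on the response regardless of the status code.
	fmt.Println("Status:         ", resp.Status)
	fmt.Println("Audit-Id:       ", resp.Header.Get("Audit-Id"))
	fmt.Println("PF FlowSchema:  ", resp.Header.Get("X-Kubernetes-Pf-Flowschema-Uid"))
	fmt.Println("PF PriorityLvl: ", resp.Header.Get("X-Kubernetes-Pf-Prioritylevel-Uid"))
}
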
	I0314 19:42:08.743066    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:08.743066    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:08.743066    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:08.743066    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:08.746635    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:08.747021    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:08.747021    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:08.747021    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:08.747021    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:09 GMT
	I0314 19:42:08.747021    8428 round_trippers.go:580]     Audit-Id: 0b27a5a8-562c-4ebc-9d13-568b35903b6f
	I0314 19:42:08.747021    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:08.747021    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:08.747021    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:08.747774    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:08.747873    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:08.747873    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:08.747873    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:08.751078    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:08.751078    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:08.751078    8428 round_trippers.go:580]     Audit-Id: 152da8a3-937f-4453-bd42-fd9b50749dad
	I0314 19:42:08.751078    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:08.751078    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:08.751078    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:08.751078    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:08.751078    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:09 GMT
	I0314 19:42:08.751666    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:09.243547    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:09.243547    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:09.243547    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:09.243547    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:09.247210    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:09.247210    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:09.247210    8428 round_trippers.go:580]     Audit-Id: 68fb300b-914f-4a93-86ad-db4b67e9e8e6
	I0314 19:42:09.247210    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:09.247210    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:09.247210    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:09.247210    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:09.247210    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:09 GMT
	I0314 19:42:09.247903    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:09.248450    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:09.248563    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:09.248592    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:09.248592    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:09.252639    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:09.252639    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:09.252639    8428 round_trippers.go:580]     Audit-Id: ae0ec6ac-eb04-4861-a718-6628930ff0ba
	I0314 19:42:09.252639    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:09.252639    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:09.252639    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:09.252639    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:09.252639    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:09 GMT
	I0314 19:42:09.253439    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:09.728694    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:09.728694    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:09.728694    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:09.728948    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:09.733667    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:09.733793    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:09.733793    8428 round_trippers.go:580]     Audit-Id: f00c7d01-5c88-4ea4-910d-33b306d4aacf
	I0314 19:42:09.733793    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:09.733793    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:09.733793    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:09.733793    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:09.733793    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:09 GMT
	I0314 19:42:09.733793    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:09.734601    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:09.734601    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:09.734601    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:09.734601    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:09.738417    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:09.738417    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:09.738417    8428 round_trippers.go:580]     Audit-Id: abc3d71c-48dc-4fa3-b91f-c371bfa16a2d
	I0314 19:42:09.738417    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:09.738417    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:09.738417    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:09.738417    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:09.738417    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:10 GMT
	I0314 19:42:09.738669    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:10.232402    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:10.232637    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:10.232690    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:10.232690    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:10.238460    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:10.238460    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:10.238523    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:10.238523    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:10 GMT
	I0314 19:42:10.238523    8428 round_trippers.go:580]     Audit-Id: 971c256b-a813-4d0a-a55a-c59e9e1b460e
	I0314 19:42:10.238523    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:10.238523    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:10.238523    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:10.238976    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:10.239913    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:10.239960    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:10.239960    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:10.239960    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:10.242307    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:10.242307    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:10.242307    8428 round_trippers.go:580]     Audit-Id: 309a0d3c-a75d-4ddd-8544-2d3fda2ce586
	I0314 19:42:10.243313    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:10.243313    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:10.243313    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:10.243313    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:10.243313    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:10 GMT
	I0314 19:42:10.243568    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:10.243968    8428 pod_ready.go:102] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"False"
	I0314 19:42:10.736996    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:10.736996    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:10.736996    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:10.736996    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:10.743087    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:42:10.743087    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:10.743087    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:10.743087    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:10.743087    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:10.743087    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:10.743087    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:11 GMT
	I0314 19:42:10.743087    8428 round_trippers.go:580]     Audit-Id: 661290cf-6117-4781-af8a-804cfab2f5a3
	I0314 19:42:10.743087    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:10.743905    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:10.743905    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:10.743905    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:10.743905    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:10.747196    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:10.747196    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:10.747196    8428 round_trippers.go:580]     Audit-Id: 63a55bb5-6c16-415f-8f3b-d07dd9c9951b
	I0314 19:42:10.747196    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:10.747196    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:10.747196    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:10.747196    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:10.747196    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:11 GMT
	I0314 19:42:10.747713    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:11.241876    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:11.242090    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:11.242090    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:11.242090    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:11.245858    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:11.245858    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:11.245996    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:11.245996    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:11.245996    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:11 GMT
	I0314 19:42:11.245996    8428 round_trippers.go:580]     Audit-Id: ee7f495e-c249-435e-b817-cf3b140b6cbe
	I0314 19:42:11.245996    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:11.245996    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:11.246116    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:11.246770    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:11.246770    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:11.246770    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:11.246770    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:11.249101    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:11.249101    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:11.249101    8428 round_trippers.go:580]     Audit-Id: a4a5c67d-4539-4d97-a01d-e9c54b59c140
	I0314 19:42:11.249101    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:11.249101    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:11.249101    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:11.249101    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:11.249101    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:11 GMT
	I0314 19:42:11.250230    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:11.743089    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:11.743089    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:11.743089    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:11.743089    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:11.746675    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:11.746675    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:11.746675    8428 round_trippers.go:580]     Audit-Id: abdd4982-e798-4fcd-97d6-7f4f7563370d
	I0314 19:42:11.746675    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:11.746675    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:11.746675    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:11.746675    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:11.746675    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:11.747569    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1714","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0314 19:42:11.748272    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:11.748345    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:11.748345    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:11.748345    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:11.751002    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:11.751002    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:11.751002    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:11.751002    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:11.751002    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:11.751002    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:11.751002    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:11.751002    8428 round_trippers.go:580]     Audit-Id: 960555fa-54b5-4555-aeb8-964909247f6e
	I0314 19:42:11.751827    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:12.233574    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:42:12.233574    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.233574    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.233574    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.237703    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:12.237703    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.237703    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.237703    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.237703    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.237703    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.237703    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.237703    8428 round_trippers.go:580]     Audit-Id: 51fd0700-93c9-4b80-b937-ede353918635
	I0314 19:42:12.237703    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1908","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0314 19:42:12.238597    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:12.238597    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.238649    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.238649    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.241456    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.241657    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.241715    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.241715    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.241715    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.241715    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.241715    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.241715    8428 round_trippers.go:580]     Audit-Id: 9d0a5006-489e-419e-b269-4cbc6810419e
	I0314 19:42:12.241929    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:12.242206    8428 pod_ready.go:92] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:42:12.242206    8428 pod_ready.go:81] duration metric: took 30.5137419s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.242206    8428 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.242206    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-442000
	I0314 19:42:12.242206    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.242206    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.242206    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.244894    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.245712    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.245712    8428 round_trippers.go:580]     Audit-Id: ed709778-c966-4692-ab45-fcb485388b4d
	I0314 19:42:12.245712    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.245712    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.245712    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.245712    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.245712    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.245712    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-442000","namespace":"kube-system","uid":"106cc31d-907f-4853-9e8d-f13c8ac4e398","resourceVersion":"1808","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.93.236:2379","kubernetes.io/config.hash":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.mirror":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.seen":"2024-03-14T19:41:00.367789550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0314 19:42:12.246780    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:12.246780    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.246852    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.246852    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.249709    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.249751    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.249751    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.249751    8428 round_trippers.go:580]     Audit-Id: 90a3bdad-45ce-47b2-8b2f-5249440ad9b3
	I0314 19:42:12.249751    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.249751    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.249751    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.249825    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.250135    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:12.250135    8428 pod_ready.go:92] pod "etcd-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:42:12.250135    8428 pod_ready.go:81] duration metric: took 7.9285ms for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.250135    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.250707    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-442000
	I0314 19:42:12.250743    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.250743    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.250743    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.252964    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.252964    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.252964    8428 round_trippers.go:580]     Audit-Id: 1fa2a565-92e8-4391-a6d7-66bc22bbc0ee
	I0314 19:42:12.252964    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.252964    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.252964    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.252964    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.252964    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.253977    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-442000","namespace":"kube-system","uid":"ebdd5ddf-2b02-4315-bc64-1b10c383d507","resourceVersion":"1817","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.93.236:8443","kubernetes.io/config.hash":"7754d2f32966faec8123dc3b8a2af767","kubernetes.io/config.mirror":"7754d2f32966faec8123dc3b8a2af767","kubernetes.io/config.seen":"2024-03-14T19:41:00.350706636Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0314 19:42:12.254468    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:12.254525    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.254525    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.254525    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.257014    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.257411    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.257411    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.257411    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.257411    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.257411    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.257411    8428 round_trippers.go:580]     Audit-Id: 446cd449-9ee0-40f9-b0ac-290ab6ed6599
	I0314 19:42:12.257411    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.257570    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:12.257623    8428 pod_ready.go:92] pod "kube-apiserver-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:42:12.257623    8428 pod_ready.go:81] duration metric: took 7.4873ms for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.257623    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.257623    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-442000
	I0314 19:42:12.257623    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.257623    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.257623    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.260355    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.260355    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.260355    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.260355    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.260355    8428 round_trippers.go:580]     Audit-Id: 0d35a5dd-89dc-42a6-8d55-25cbe17507ed
	I0314 19:42:12.260355    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.260355    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.260355    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.261203    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-442000","namespace":"kube-system","uid":"b16fc874-ef74-44ca-a54f-bb678bf982df","resourceVersion":"1813","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.mirror":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.seen":"2024-03-14T19:18:55.420205308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0314 19:42:12.261801    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:12.261801    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.261861    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.261861    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.264651    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.264651    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.264651    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.264651    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.264651    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.264651    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.264651    8428 round_trippers.go:580]     Audit-Id: 45fed321-4ad9-4a25-be94-77537a34fc26
	I0314 19:42:12.264651    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.264651    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:12.264651    8428 pod_ready.go:92] pod "kube-controller-manager-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:42:12.265185    8428 pod_ready.go:81] duration metric: took 7.5614ms for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.265185    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.265185    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:42:12.265305    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.265305    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.265305    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.267504    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.267504    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.267504    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.267504    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.267504    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.267504    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.267504    8428 round_trippers.go:580]     Audit-Id: fc854cd4-bc4e-4993-9bee-909163a89efe
	I0314 19:42:12.267504    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.268357    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-72dzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"80b840b0-3803-4102-a966-ea73aed74f49","resourceVersion":"1892","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0314 19:42:12.268821    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:42:12.268821    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.268821    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.268821    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.271025    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:42:12.271025    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.271025    8428 round_trippers.go:580]     Audit-Id: 9cb3f786-8f00-40f6-9fde-bf6ead449876
	I0314 19:42:12.271025    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.271025    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.271520    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.271520    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.271520    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.271675    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"5f369d83-fce6-47fe-b14b-171ed626975b","resourceVersion":"1896","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_22_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4582 chars]
	I0314 19:42:12.271675    8428 pod_ready.go:97] node "multinode-442000-m02" hosting pod "kube-proxy-72dzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m02" has status "Ready":"Unknown"
	I0314 19:42:12.271675    8428 pod_ready.go:81] duration metric: took 6.4894ms for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	E0314 19:42:12.271675    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000-m02" hosting pod "kube-proxy-72dzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m02" has status "Ready":"Unknown"
	I0314 19:42:12.271675    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.435681    8428 request.go:629] Waited for 163.4705ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:42:12.435795    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:42:12.435795    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.435795    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.435903    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.439240    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:12.440099    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.440099    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.440099    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.440099    8428 round_trippers.go:580]     Audit-Id: 2c36939a-e5b0-4793-a24f-88836a45324b
	I0314 19:42:12.440099    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.440099    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.440099    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.440340    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cg28g","generateName":"kube-proxy-","namespace":"kube-system","uid":"c7f798bf-6722-4731-af8d-ccd5703d116e","resourceVersion":"1728","creationTimestamp":"2024-03-14T19:19:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
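The "Waited for … due to client-side throttling, not priority and fairness" messages around this point are emitted by client-go's own token-bucket rate limiter, not by the API server: with this many sequential GETs, the default limit of 5 requests/second (burst 10) is quickly exhausted and each request sleeps until a token is free. A sketch of where that knob lives, with illustrative values and reusing the imports from the sketch above:

    // newFasterClient raises the client-side rate limit so sequential polling
    // does not trip the "client-side throttling" waits seen in this log.
    func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go default: 5 requests/second
        cfg.Burst = 100 // client-go default: 10
        return kubernetes.NewForConfig(cfg)
    }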
	I0314 19:42:12.637591    8428 request.go:629] Waited for 196.3988ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:12.637712    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:12.637712    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.637712    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.637712    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.642075    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:12.647477    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.647865    8428 round_trippers.go:580]     Audit-Id: cf04f011-97b2-4e74-b284-1cfb245a502c
	I0314 19:42:12.647865    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.647865    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.647865    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.647865    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.647865    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:12 GMT
	I0314 19:42:12.648173    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:12.648321    8428 pod_ready.go:92] pod "kube-proxy-cg28g" in "kube-system" namespace has status "Ready":"True"
	I0314 19:42:12.648321    8428 pod_ready.go:81] duration metric: took 376.6178ms for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.648321    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2qls" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:12.841758    8428 request.go:629] Waited for 193.4221ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2qls
	I0314 19:42:12.842110    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2qls
	I0314 19:42:12.842110    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:12.842110    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:12.842110    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:12.845842    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:12.845842    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:12.845842    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:13 GMT
	I0314 19:42:12.845842    8428 round_trippers.go:580]     Audit-Id: f84cb0eb-1f70-4c2e-945d-34ff75c5056d
	I0314 19:42:12.845842    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:12.845842    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:12.845842    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:12.845842    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:12.846256    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w2qls","generateName":"kube-proxy-","namespace":"kube-system","uid":"7a53e602-282e-4b63-a993-a5d23d3c615f","resourceVersion":"1678","creationTimestamp":"2024-03-14T19:26:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:26:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0314 19:42:13.043213    8428 request.go:629] Waited for 196.0671ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m03
	I0314 19:42:13.043316    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m03
	I0314 19:42:13.043316    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:13.043460    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:13.043536    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:13.046717    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:13.046717    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:13.046717    8428 round_trippers.go:580]     Audit-Id: 653afd05-a719-49ed-90fc-277195de6957
	I0314 19:42:13.046717    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:13.046717    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:13.046717    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:13.046717    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:13.046717    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:13 GMT
	I0314 19:42:13.047337    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m03","uid":"1b8e342b-6e96-49e8-a22c-874445d29fe3","resourceVersion":"1846","creationTimestamp":"2024-03-14T19:36:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_36_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:36:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0314 19:42:13.047455    8428 pod_ready.go:97] node "multinode-442000-m03" hosting pod "kube-proxy-w2qls" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m03" has status "Ready":"Unknown"
	I0314 19:42:13.047455    8428 pod_ready.go:81] duration metric: took 399.1034ms for pod "kube-proxy-w2qls" in "kube-system" namespace to be "Ready" ...
	E0314 19:42:13.047455    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000-m03" hosting pod "kube-proxy-w2qls" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m03" has status "Ready":"Unknown"
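Both skips above come down to the hosting node's Ready condition reporting "Unknown" rather than "True", so the wait for the pod is abandoned early instead of running out the 6m0s timeout. A sketch of that condition check, assuming a *corev1.Node fetched as in the requests above (corev1 is k8s.io/api/core/v1):

    // nodeIsReady mirrors the check implied by the pod_ready messages:
    // only a NodeReady condition with status "True" counts; "Unknown",
    // as reported for multinode-442000-m02 and -m03 here, does not.
    func nodeIsReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }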
	I0314 19:42:13.047455    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:13.246321    8428 request.go:629] Waited for 198.6029ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:42:13.246864    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:42:13.246864    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:13.246938    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:13.246938    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:13.250294    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:13.250294    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:13.250294    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:13 GMT
	I0314 19:42:13.250294    8428 round_trippers.go:580]     Audit-Id: 082897c3-4608-499b-a9d7-2d539edadd7f
	I0314 19:42:13.250294    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:13.250294    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:13.250294    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:13.250294    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:13.250931    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-442000","namespace":"kube-system","uid":"76b10598-fe0d-4a14-a8e4-a32221fbb68f","resourceVersion":"1803","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.mirror":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.seen":"2024-03-14T19:18:55.420206709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0314 19:42:13.434404    8428 request.go:629] Waited for 182.7389ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:13.434528    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:42:13.434528    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:13.434528    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:13.434528    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:13.437885    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:13.438393    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:13.438393    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:13.438393    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:13.438393    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:13.438393    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:13.438393    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:13 GMT
	I0314 19:42:13.438393    8428 round_trippers.go:580]     Audit-Id: 7991ac5c-b2ff-42f2-b767-b0276c04ddff
	I0314 19:42:13.438599    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:42:13.438772    8428 pod_ready.go:92] pod "kube-scheduler-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:42:13.438772    8428 pod_ready.go:81] duration metric: took 391.2879ms for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:42:13.438772    8428 pod_ready.go:38] duration metric: took 31.7217073s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
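The 31.7s total above covers six label selectors, one per system-critical component, each resolved against the kube-system namespace. Listing the pods behind any one of them is a single client-go call; a sketch using one selector copied from the line above (client and ctx as in the earlier sketches):

    // List the kube-proxy pods that the wait loop above iterated over.
    pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
        LabelSelector: "k8s-app=kube-proxy",
    })
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        fmt.Println(p.Name, p.Status.Phase)
    }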
	I0314 19:42:13.438772    8428 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:42:13.446450    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 19:42:13.471875    8428 command_runner.go:130] > a598d24960de
	I0314 19:42:13.471923    8428 logs.go:276] 1 containers: [a598d24960de]
	I0314 19:42:13.478296    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 19:42:13.503761    8428 command_runner.go:130] > a81a9c43c355
	I0314 19:42:13.503916    8428 logs.go:276] 1 containers: [a81a9c43c355]
	I0314 19:42:13.511298    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 19:42:13.537609    8428 command_runner.go:130] > b159aedddf94
	I0314 19:42:13.537691    8428 command_runner.go:130] > 8899bc003893
	I0314 19:42:13.537852    8428 logs.go:276] 2 containers: [b159aedddf94 8899bc003893]
	I0314 19:42:13.544603    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 19:42:13.572306    8428 command_runner.go:130] > 32d90a3ea213
	I0314 19:42:13.572441    8428 command_runner.go:130] > dbb603289bf1
	I0314 19:42:13.572520    8428 logs.go:276] 2 containers: [32d90a3ea213 dbb603289bf1]
	I0314 19:42:13.580913    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 19:42:13.605087    8428 command_runner.go:130] > 497007582e44
	I0314 19:42:13.605087    8428 command_runner.go:130] > 2a62baf3f1b4
	I0314 19:42:13.605087    8428 logs.go:276] 2 containers: [497007582e44 2a62baf3f1b4]
	I0314 19:42:13.614960    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 19:42:13.639965    8428 command_runner.go:130] > 12baf105f0bb
	I0314 19:42:13.640856    8428 command_runner.go:130] > 16b80f73683d
	I0314 19:42:13.641108    8428 logs.go:276] 2 containers: [12baf105f0bb 16b80f73683d]
	I0314 19:42:13.648277    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 19:42:13.672225    8428 command_runner.go:130] > 999e4c168afe
	I0314 19:42:13.672628    8428 command_runner.go:130] > 1a321c0e8997
	I0314 19:42:13.672824    8428 logs.go:276] 2 containers: [999e4c168afe 1a321c0e8997]
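The seven "docker ps -a --filter=name=k8s_… --format={{.ID}}" runs above enumerate the control-plane containers one component at a time; the "docker logs --tail 400 <id>" run just below then collects each container's recent output. A sketch of that enumeration, assuming docker on the local PATH (the test actually runs it over SSH inside the minikube VM):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists the IDs of containers whose names match
    // k8s_<component>, the same filter used in the log lines above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            panic(err)
        }
        fmt.Println(ids) // e.g. [a598d24960de] as in the log above
    }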
	I0314 19:42:13.672824    8428 logs.go:123] Gathering logs for kindnet [1a321c0e8997] ...
	I0314 19:42:13.672824    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a321c0e8997"
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:36.366640       1 main.go:227] handling current node
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:36.366652       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:36.366658       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:36.366818       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:36.366827       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:46.378468       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:46.378496       1 main.go:227] handling current node
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:46.378506       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:46.378513       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:46.379039       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:46.379130       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:56.393642       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:56.393700       1 main.go:227] handling current node
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:56.393723       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:56.393733       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:56.394716       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:27:56.394779       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:06.403171       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:06.403199       1 main.go:227] handling current node
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:06.403212       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:06.403219       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:06.403663       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:06.403834       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:16.415146       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:16.415237       1 main.go:227] handling current node
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:16.415250       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:16.415260       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:16.415497       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:16.415703       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:26.430257       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:26.430350       1 main.go:227] handling current node
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:26.430364       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.710719    8428 command_runner.go:130] ! I0314 19:28:26.430372       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.711739    8428 command_runner.go:130] ! I0314 19:28:26.430709       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.711739    8428 command_runner.go:130] ! I0314 19:28:26.430804       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.711739    8428 command_runner.go:130] ! I0314 19:28:36.445854       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.711739    8428 command_runner.go:130] ! I0314 19:28:36.445897       1 main.go:227] handling current node
	I0314 19:42:13.711871    8428 command_runner.go:130] ! I0314 19:28:36.445915       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.711871    8428 command_runner.go:130] ! I0314 19:28:36.446285       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.711871    8428 command_runner.go:130] ! I0314 19:28:36.446702       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.711871    8428 command_runner.go:130] ! I0314 19:28:36.446731       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.711981    8428 command_runner.go:130] ! I0314 19:28:46.461369       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.711981    8428 command_runner.go:130] ! I0314 19:28:46.462057       1 main.go:227] handling current node
	I0314 19:42:13.711981    8428 command_runner.go:130] ! I0314 19:28:46.462235       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.711981    8428 command_runner.go:130] ! I0314 19:28:46.462250       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.712076    8428 command_runner.go:130] ! I0314 19:28:46.462593       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.712076    8428 command_runner.go:130] ! I0314 19:28:46.462770       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.712076    8428 command_runner.go:130] ! I0314 19:28:56.477451       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.712201    8428 command_runner.go:130] ! I0314 19:28:56.477483       1 main.go:227] handling current node
	I0314 19:42:13.712201    8428 command_runner.go:130] ! I0314 19:28:56.477496       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.712201    8428 command_runner.go:130] ! I0314 19:28:56.477508       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.712298    8428 command_runner.go:130] ! I0314 19:28:56.478007       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.712298    8428 command_runner.go:130] ! I0314 19:28:56.478089       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.712298    8428 command_runner.go:130] ! I0314 19:29:06.484423       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.712298    8428 command_runner.go:130] ! I0314 19:29:06.484497       1 main.go:227] handling current node
	I0314 19:42:13.712298    8428 command_runner.go:130] ! I0314 19:29:06.484559       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.712406    8428 command_runner.go:130] ! I0314 19:29:06.484624       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.712756    8428 command_runner.go:130] ! I0314 19:29:06.484852       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.712864    8428 command_runner.go:130] ! I0314 19:29:06.484945       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.712864    8428 command_runner.go:130] ! I0314 19:29:16.500812       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.712864    8428 command_runner.go:130] ! I0314 19:29:16.500909       1 main.go:227] handling current node
	I0314 19:42:13.712961    8428 command_runner.go:130] ! I0314 19:29:16.500924       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.712983    8428 command_runner.go:130] ! I0314 19:29:16.500932       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.713061    8428 command_runner.go:130] ! I0314 19:29:16.501505       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.713061    8428 command_runner.go:130] ! I0314 19:29:16.501585       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.713061    8428 command_runner.go:130] ! I0314 19:29:26.508494       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.713061    8428 command_runner.go:130] ! I0314 19:29:26.508585       1 main.go:227] handling current node
	I0314 19:42:13.713061    8428 command_runner.go:130] ! I0314 19:29:26.508601       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.713171    8428 command_runner.go:130] ! I0314 19:29:26.508609       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.713171    8428 command_runner.go:130] ! I0314 19:29:26.508822       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.713171    8428 command_runner.go:130] ! I0314 19:29:26.508837       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.713171    8428 command_runner.go:130] ! I0314 19:29:36.517002       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.713279    8428 command_runner.go:130] ! I0314 19:29:36.517123       1 main.go:227] handling current node
	I0314 19:42:13.713279    8428 command_runner.go:130] ! I0314 19:29:36.517142       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.713279    8428 command_runner.go:130] ! I0314 19:29:36.517155       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.713279    8428 command_runner.go:130] ! I0314 19:29:36.517648       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.713382    8428 command_runner.go:130] ! I0314 19:29:36.517836       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.713382    8428 command_runner.go:130] ! I0314 19:29:46.530826       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.713382    8428 command_runner.go:130] ! I0314 19:29:46.530962       1 main.go:227] handling current node
	I0314 19:42:13.713476    8428 command_runner.go:130] ! I0314 19:29:46.530978       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.713476    8428 command_runner.go:130] ! I0314 19:29:46.531314       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.713476    8428 command_runner.go:130] ! I0314 19:29:46.531557       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.713568    8428 command_runner.go:130] ! I0314 19:29:46.531706       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.713568    8428 command_runner.go:130] ! I0314 19:29:56.551916       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.713568    8428 command_runner.go:130] ! I0314 19:29:56.551953       1 main.go:227] handling current node
	I0314 19:42:13.713568    8428 command_runner.go:130] ! I0314 19:29:56.551965       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.713702    8428 command_runner.go:130] ! I0314 19:29:56.551971       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.713789    8428 command_runner.go:130] ! I0314 19:29:56.552084       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.713789    8428 command_runner.go:130] ! I0314 19:29:56.552107       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.713789    8428 command_runner.go:130] ! I0314 19:30:06.560066       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.713864    8428 command_runner.go:130] ! I0314 19:30:06.560115       1 main.go:227] handling current node
	I0314 19:42:13.713864    8428 command_runner.go:130] ! I0314 19:30:06.560129       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.713933    8428 command_runner.go:130] ! I0314 19:30:06.560136       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.713956    8428 command_runner.go:130] ! I0314 19:30:06.560429       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.714039    8428 command_runner.go:130] ! I0314 19:30:06.560534       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.714062    8428 command_runner.go:130] ! I0314 19:30:16.573690       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.714135    8428 command_runner.go:130] ! I0314 19:30:16.573731       1 main.go:227] handling current node
	I0314 19:42:13.714135    8428 command_runner.go:130] ! I0314 19:30:16.573978       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.714208    8428 command_runner.go:130] ! I0314 19:30:16.574067       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.714208    8428 command_runner.go:130] ! I0314 19:30:16.574385       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.714256    8428 command_runner.go:130] ! I0314 19:30:16.574414       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.714256    8428 command_runner.go:130] ! I0314 19:30:26.589277       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.714256    8428 command_runner.go:130] ! I0314 19:30:26.589488       1 main.go:227] handling current node
	I0314 19:42:13.714331    8428 command_runner.go:130] ! I0314 19:30:26.589534       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.714415    8428 command_runner.go:130] ! I0314 19:30:26.589557       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:26.589802       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:26.589885       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:36.605356       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:36.605400       1 main.go:227] handling current node
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:36.605412       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:36.605418       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:36.605556       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:36.605625       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:46.612911       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:46.613010       1 main.go:227] handling current node
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:46.613025       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:46.613034       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:46.613445       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:46.615380       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:56.630605       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:56.630965       1 main.go:227] handling current node
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:56.631076       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:56.631132       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.714438    8428 command_runner.go:130] ! I0314 19:30:56.631442       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.714974    8428 command_runner.go:130] ! I0314 19:30:56.631542       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.715094    8428 command_runner.go:130] ! I0314 19:31:06.643588       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.715094    8428 command_runner.go:130] ! I0314 19:31:06.643631       1 main.go:227] handling current node
	I0314 19:42:13.715094    8428 command_runner.go:130] ! I0314 19:31:06.643643       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.715094    8428 command_runner.go:130] ! I0314 19:31:06.643650       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.715198    8428 command_runner.go:130] ! I0314 19:31:06.644160       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.715198    8428 command_runner.go:130] ! I0314 19:31:06.644255       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.715198    8428 command_runner.go:130] ! I0314 19:31:16.650940       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.715198    8428 command_runner.go:130] ! I0314 19:31:16.651187       1 main.go:227] handling current node
	I0314 19:42:13.715309    8428 command_runner.go:130] ! I0314 19:31:16.651208       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.715309    8428 command_runner.go:130] ! I0314 19:31:16.651236       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.715309    8428 command_runner.go:130] ! I0314 19:31:16.651354       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.715309    8428 command_runner.go:130] ! I0314 19:31:16.651374       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.715413    8428 command_runner.go:130] ! I0314 19:31:26.665304       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.715413    8428 command_runner.go:130] ! I0314 19:31:26.665403       1 main.go:227] handling current node
	I0314 19:42:13.715413    8428 command_runner.go:130] ! I0314 19:31:26.665418       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.715509    8428 command_runner.go:130] ! I0314 19:31:26.665427       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.715509    8428 command_runner.go:130] ! I0314 19:31:26.665674       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.715509    8428 command_runner.go:130] ! I0314 19:31:26.665859       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.715603    8428 command_runner.go:130] ! I0314 19:31:36.681645       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.715603    8428 command_runner.go:130] ! I0314 19:31:36.681680       1 main.go:227] handling current node
	I0314 19:42:13.715603    8428 command_runner.go:130] ! I0314 19:31:36.681695       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.715603    8428 command_runner.go:130] ! I0314 19:31:36.681704       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.715699    8428 command_runner.go:130] ! I0314 19:31:36.682032       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.715699    8428 command_runner.go:130] ! I0314 19:31:36.682062       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.715699    8428 command_runner.go:130] ! I0314 19:31:46.697305       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.715699    8428 command_runner.go:130] ! I0314 19:31:46.697415       1 main.go:227] handling current node
	I0314 19:42:13.715804    8428 command_runner.go:130] ! I0314 19:31:46.697432       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.715804    8428 command_runner.go:130] ! I0314 19:31:46.697444       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.715804    8428 command_runner.go:130] ! I0314 19:31:46.697965       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.715804    8428 command_runner.go:130] ! I0314 19:31:46.698093       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.715916    8428 command_runner.go:130] ! I0314 19:31:56.705518       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.715916    8428 command_runner.go:130] ! I0314 19:31:56.705613       1 main.go:227] handling current node
	I0314 19:42:13.715985    8428 command_runner.go:130] ! I0314 19:31:56.705627       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.716020    8428 command_runner.go:130] ! I0314 19:31:56.705635       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.716020    8428 command_runner.go:130] ! I0314 19:31:56.706151       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.716064    8428 command_runner.go:130] ! I0314 19:31:56.706269       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.716097    8428 command_runner.go:130] ! I0314 19:32:06.716977       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.716097    8428 command_runner.go:130] ! I0314 19:32:06.717087       1 main.go:227] handling current node
	I0314 19:42:13.716170    8428 command_runner.go:130] ! I0314 19:32:06.717105       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:06.717116       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:06.717701       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:06.717870       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:16.738903       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:16.738946       1 main.go:227] handling current node
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:16.738962       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:16.738971       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:16.739310       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:16.739420       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:26.749067       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:26.749521       1 main.go:227] handling current node
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:26.749656       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:26.749670       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:26.750040       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:26.750074       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:36.765313       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:36.765423       1 main.go:227] handling current node
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:36.765442       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:36.765453       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:36.766102       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:36.766130       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:46.781715       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:46.781800       1 main.go:227] handling current node
	I0314 19:42:13.716229    8428 command_runner.go:130] ! I0314 19:32:46.782151       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.716802    8428 command_runner.go:130] ! I0314 19:32:46.782168       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.716942    8428 command_runner.go:130] ! I0314 19:32:46.782370       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.716942    8428 command_runner.go:130] ! I0314 19:32:46.782396       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.717018    8428 command_runner.go:130] ! I0314 19:32:56.797473       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.717041    8428 command_runner.go:130] ! I0314 19:32:56.797568       1 main.go:227] handling current node
	I0314 19:42:13.717041    8428 command_runner.go:130] ! I0314 19:32:56.797583       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.717115    8428 command_runner.go:130] ! I0314 19:32:56.797621       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.717184    8428 command_runner.go:130] ! I0314 19:32:56.797733       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.717219    8428 command_runner.go:130] ! I0314 19:32:56.797772       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.717219    8428 command_runner.go:130] ! I0314 19:33:06.803421       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.717219    8428 command_runner.go:130] ! I0314 19:33:06.803513       1 main.go:227] handling current node
	I0314 19:42:13.717365    8428 command_runner.go:130] ! I0314 19:33:06.803527       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.717418    8428 command_runner.go:130] ! I0314 19:33:06.803534       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.717418    8428 command_runner.go:130] ! I0314 19:33:06.804158       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.717418    8428 command_runner.go:130] ! I0314 19:33:06.804237       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.717495    8428 command_runner.go:130] ! I0314 19:33:16.818983       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:16.819134       1 main.go:227] handling current node
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:16.819149       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:16.819157       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:16.819421       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:16.819491       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:26.826209       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:26.826474       1 main.go:227] handling current node
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:26.826509       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:26.826519       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:26.826794       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:26.826886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:36.839979       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:36.840555       1 main.go:227] handling current node
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:36.840828       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:36.840855       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:36.841055       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:36.841183       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:46.854483       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:46.854585       1 main.go:227] handling current node
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:46.854600       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:46.854608       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:46.855303       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.717562    8428 command_runner.go:130] ! I0314 19:33:46.855389       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.718092    8428 command_runner.go:130] ! I0314 19:33:56.867052       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.718092    8428 command_runner.go:130] ! I0314 19:33:56.867136       1 main.go:227] handling current node
	I0314 19:42:13.718092    8428 command_runner.go:130] ! I0314 19:33:56.867150       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.718194    8428 command_runner.go:130] ! I0314 19:33:56.867158       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.718281    8428 command_runner.go:130] ! I0314 19:33:56.867493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:33:56.867886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:06.874298       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:06.874391       1 main.go:227] handling current node
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:06.874405       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:06.874413       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:06.874932       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:06.874962       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:16.890513       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:16.890589       1 main.go:227] handling current node
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:16.890604       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:16.890612       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:16.890870       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:16.890953       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:26.908423       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:26.908576       1 main.go:227] handling current node
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:26.908597       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:26.908606       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:26.909103       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:26.909271       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:36.915794       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:36.915910       1 main.go:227] handling current node
	I0314 19:42:13.718316    8428 command_runner.go:130] ! I0314 19:34:36.915926       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.718853    8428 command_runner.go:130] ! I0314 19:34:36.915935       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.718853    8428 command_runner.go:130] ! I0314 19:34:36.916282       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.718853    8428 command_runner.go:130] ! I0314 19:34:36.916372       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.718853    8428 command_runner.go:130] ! I0314 19:34:46.931699       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.718853    8428 command_runner.go:130] ! I0314 19:34:46.931833       1 main.go:227] handling current node
	I0314 19:42:13.718974    8428 command_runner.go:130] ! I0314 19:34:46.931849       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:46.931858       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:46.932099       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:46.932124       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:56.946470       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:56.946565       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:56.946580       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:56.946588       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:56.946812       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:34:56.946927       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:06.960844       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:06.960939       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:06.960954       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:06.960962       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:06.961467       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:06.961574       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:16.981993       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:16.982080       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:16.982095       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:16.982103       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:16.982594       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:16.982673       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:26.993848       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:26.993940       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:26.993955       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:26.993963       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:26.994360       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:26.994437       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:37.008613       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:37.008706       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:37.008720       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:37.008727       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:37.009233       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:37.009320       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:47.018420       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:47.018526       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:47.018541       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:47.018549       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:47.018669       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:47.018680       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:57.025132       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:57.025207       1 main.go:227] handling current node
	I0314 19:42:13.719026    8428 command_runner.go:130] ! I0314 19:35:57.025220       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719561    8428 command_runner.go:130] ! I0314 19:35:57.025228       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719561    8428 command_runner.go:130] ! I0314 19:35:57.026009       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719561    8428 command_runner.go:130] ! I0314 19:35:57.026145       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719561    8428 command_runner.go:130] ! I0314 19:36:07.042281       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719561    8428 command_runner.go:130] ! I0314 19:36:07.042353       1 main.go:227] handling current node
	I0314 19:42:13.719561    8428 command_runner.go:130] ! I0314 19:36:07.042367       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719643    8428 command_runner.go:130] ! I0314 19:36:07.042375       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719643    8428 command_runner.go:130] ! I0314 19:36:07.042493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719643    8428 command_runner.go:130] ! I0314 19:36:07.042500       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719693    8428 command_runner.go:130] ! I0314 19:36:17.055539       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719693    8428 command_runner.go:130] ! I0314 19:36:17.055567       1 main.go:227] handling current node
	I0314 19:42:13.719693    8428 command_runner.go:130] ! I0314 19:36:17.055581       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:17.055588       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:17.056312       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:17.056341       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:27.067921       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:27.067961       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:27.069052       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:27.069179       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:27.069306       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:27.069332       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:37.082322       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:37.082413       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:37.082429       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:37.082437       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:37.082972       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:37.083000       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:47.099685       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:47.099830       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:47.099862       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:47.099982       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.107274       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.107368       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.107382       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.107390       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.107827       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.107942       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:36:57.108076       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.84.215 Flags: [] Table: 0} 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:07.120709       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:07.121059       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:07.121098       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:07.121109       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:07.121440       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:07.121455       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:17.137704       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:17.137784       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:17.137796       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:17.137803       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:17.138265       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:17.138298       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:27.144505       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:27.144594       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:27.144607       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:27.144615       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:27.145062       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:27.145092       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:37.154684       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:37.154836       1 main.go:227] handling current node
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:37.154851       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.719733    8428 command_runner.go:130] ! I0314 19:37:37.154860       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:37.155452       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:37.155614       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:47.168249       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:47.168338       1 main.go:227] handling current node
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:47.168352       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:47.168360       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:47.168976       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720362    8428 command_runner.go:130] ! I0314 19:37:47.169064       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.720464    8428 command_runner.go:130] ! I0314 19:37:57.176039       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.720464    8428 command_runner.go:130] ! I0314 19:37:57.176130       1 main.go:227] handling current node
	I0314 19:42:13.720506    8428 command_runner.go:130] ! I0314 19:37:57.176145       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.720506    8428 command_runner.go:130] ! I0314 19:37:57.176153       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:37:57.176528       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:37:57.176659       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:07.189890       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:07.189993       1 main.go:227] handling current node
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:07.190008       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:07.190016       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:07.190217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:07.190245       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:17.196541       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:17.196633       1 main.go:227] handling current node
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:17.196647       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:17.196655       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:17.196888       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:17.197012       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:27.217365       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:27.217460       1 main.go:227] handling current node
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:27.217475       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:27.217483       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:27.217621       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:27.217634       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:37.229941       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:37.230048       1 main.go:227] handling current node
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:37.230062       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:37.230070       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:37.230268       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:13.720548    8428 command_runner.go:130] ! I0314 19:38:37.230338       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
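	The kindnet entries above trace a fixed-interval reconcile: roughly every ten seconds the agent walks the node list, logs "handling current node" for itself, and records each remote node's pod CIDR; when multinode-442000-m03 came back with a new CIDR (10.244.3.0/24 replacing 10.244.2.0/24 at 19:36:57), a fresh route via 172.17.84.215 was installed. Below is a minimal Go sketch of that pattern; the Node type, syncRoutes, and the hard-coded sample data are illustrative stand-ins, not kindnet's actual code.

	package main

	import "fmt"

	// Node pairs a node's primary IP with its assigned pod CIDR,
	// mirroring the "Handling node with IPs" / "has CIDR" log pairs.
	type Node struct {
		Name    string
		IP      string
		PodCIDR string
		Current bool // true for the node this agent runs on
	}

	// syncRoutes skips the current node ("handling current node") and, for
	// every remote node, would ensure a route to its pod CIDR via its node
	// IP (the "Adding route {... Dst: 10.244.3.0/24 ... Gw: 172.17.84.215}"
	// entry above). Here it only prints what it would do.
	func syncRoutes(nodes []Node) {
		for _, n := range nodes {
			fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.IP)
			if n.Current {
				fmt.Println("handling current node")
				continue
			}
			fmt.Printf("Node %s has CIDR [%s]\n", n.Name, n.PodCIDR)
			fmt.Printf("Adding route {Dst: %s Gw: %s}\n", n.PodCIDR, n.IP)
		}
	}

	func main() {
		// Sample topology taken from the log above, after m03 was re-added.
		syncRoutes([]Node{
			{"multinode-442000", "172.17.86.124", "10.244.0.0/24", true},
			{"multinode-442000-m02", "172.17.80.135", "10.244.1.0/24", false},
			{"multinode-442000-m03", "172.17.84.215", "10.244.3.0/24", false},
		})
	}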
	I0314 19:42:13.737472    8428 logs.go:123] Gathering logs for kubelet ...
	I0314 19:42:13.737472    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516074    1388 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516440    1388 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516773    1388 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: E0314 19:40:57.516893    1388 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293295    1450 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293422    1450 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293759    1450 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: E0314 19:40:58.293809    1450 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
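	Both aborted starts above fail on the same precondition: kubelet was pointed at a bootstrap kubeconfig that does not exist yet, and no rotated client certificate is available, so the process exits and systemd restarts the unit. The third start (below) succeeds once /var/lib/kubelet/pki/kubelet-client-current.pem is present, presumably written in the interim by the control-plane bring-up. A toy Go restatement of that precondition under those assumptions follows; canStart is hypothetical, not kubelet's code.

	package main

	import (
		"fmt"
		"os"
	)

	func canStart() error {
		// If the bootstrap kubeconfig is missing AND no rotated client cert
		// exists yet, startup fails exactly as in the "unable to load
		// bootstrap kubeconfig" entries above.
		if _, err := os.Stat("/etc/kubernetes/bootstrap-kubelet.conf"); err == nil {
			return nil
		}
		if _, err := os.Stat("/var/lib/kubelet/pki/kubelet-client-current.pem"); err == nil {
			return nil // the third start below takes this path
		}
		return fmt.Errorf("failed to run Kubelet: unable to load bootstrap kubeconfig")
	}

	func main() {
		if err := canStart(); err != nil {
			fmt.Println(err) // systemd then restarts the unit (restart counter above)
			os.Exit(1)
		}
		fmt.Println("Client rotation is on, will bootstrap in background")
	}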
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270178    1523 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270275    1523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270469    1523 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.272943    1523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.286808    1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.333673    1523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335204    1523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335543    1523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0314 19:42:13.766574    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335688    1523 topology_manager.go:138] "Creating topology manager with none policy"
	I0314 19:42:13.767603    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335703    1523 container_manager_linux.go:301] "Creating device plugin manager"
	I0314 19:42:13.767603    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.336879    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0314 19:42:13.767603    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.338507    1523 kubelet.go:393] "Attempting to sync node with API server"
	I0314 19:42:13.767603    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.338606    1523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0314 19:42:13.767603    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.339942    1523 kubelet.go:309] "Adding apiserver pod source"
	I0314 19:42:13.767681    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.339973    1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0314 19:42:13.767681    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.342644    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.767742    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.342728    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.767810    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.352846    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.767833    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.353005    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.767833    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.362091    1523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0314 19:42:13.767833    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.368654    1523 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0314 19:42:13.767886    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.370831    1523 server.go:1232] "Started kubelet"
	I0314 19:42:13.767886    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.376404    1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0314 19:42:13.767886    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.381472    1523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0314 19:42:13.767886    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.381715    1523 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0314 19:42:13.767941    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.383735    1523 server.go:462] "Adding debug handlers to kubelet server"
	I0314 19:42:13.767941    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.385265    1523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0314 19:42:13.767990    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.387577    1523 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0314 19:42:13.768012    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.392182    1523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0314 19:42:13.768079    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.392853    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="200ms"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.392921    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.392970    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.402867    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-442000.17bcb8e6e82683f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-442000", UID:"multinode-442000", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-442000"}, FirstTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), LastTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-442000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.93.236:8443: connect: connection refused'(may retry after sleeping)
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.431568    1523 reconciler_new.go:29] "Reconciler: start to sync state"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453043    1523 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453062    1523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453088    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453812    1523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453838    1523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453846    1523 policy_none.go:49] "None policy: Start"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.459854    1523 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.459925    1523 state_mem.go:35] "Initializing new in-memory state store"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.460715    1523 state_mem.go:75] "Updated machine memory state"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.466366    1523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.471455    1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.475344    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478780    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478820    1523 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478846    1523 kubelet.go:2303] "Starting kubelet main sync loop"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.478899    1523 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.485952    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.487569    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.493845    1523 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-442000\" not found"
	I0314 19:42:13.768116    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.501023    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:13.768644    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.501915    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:13.768684    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.503739    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0314 19:42:13.768716    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0314 19:42:13.768716    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0314 19:42:13.768752    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0314 19:42:13.768752    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0314 19:42:13.768752    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.578961    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5b88117f99a24e81a324ab026c69a7058a7c1bc88d9b9a5386134abc257bba"
	I0314 19:42:13.768752    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.578983    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e39762d7a6437164a9b2c6dd22b1f36b57514310190ce4acc3349001cb1774"
	I0314 19:42:13.768828    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.579017    1523 topology_manager.go:215] "Topology Admit Handler" podUID="2b2434280023596d1e3c90125a7219ed" podNamespace="kube-system" podName="kube-scheduler-multinode-442000"
	I0314 19:42:13.768828    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.592991    1523 topology_manager.go:215] "Topology Admit Handler" podUID="7754d2f32966faec8123dc3b8a2af767" podNamespace="kube-system" podName="kube-apiserver-multinode-442000"
	I0314 19:42:13.768902    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.594193    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="400ms"
	I0314 19:42:13.768958    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.609977    1523 topology_manager.go:215] "Topology Admit Handler" podUID="a7ee530f2bd843eddeace8cd6ec0d204" podNamespace="kube-system" podName="kube-controller-manager-multinode-442000"
	I0314 19:42:13.768958    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.622973    1523 topology_manager.go:215] "Topology Admit Handler" podUID="fa99a5621d016aa714804afcaa1e0a53" podNamespace="kube-system" podName="etcd-multinode-442000"
	I0314 19:42:13.769023    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.634832    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b2434280023596d1e3c90125a7219ed-kubeconfig\") pod \"kube-scheduler-multinode-442000\" (UID: \"2b2434280023596d1e3c90125a7219ed\") " pod="kube-system/kube-scheduler-multinode-442000"
	I0314 19:42:13.769023    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640587    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b179d157b6b2f71cc980c7ea5060a613be77e84e89947fbcb91a687ea7310eaf"
	I0314 19:42:13.769058    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640610    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046b896affe9f3219822b857a6b4dfa1427854d5df420b6b2e1cec631372548"
	I0314 19:42:13.769096    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640625    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773"
	I0314 19:42:13.769130    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640637    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b3244b47278e22e56ab0362b7a74ee80ca2806fb1074d718b0278b5bc70be76"
	I0314 19:42:13.769167    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640648    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0"
	I0314 19:42:13.769167    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640663    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="102c907609a3ac28e95d46e2671477684c5a043672e21597c677ee9dbfcb7e08"
	I0314 19:42:13.769204    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640674    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab390fc53b998ec55449f16c05933add797f430f2cc6f4b55afabf79cd8b0bc7"
	I0314 19:42:13.769204    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.713400    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:13.769262    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.714712    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:13.769311    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736377    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-ca-certs\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:13.769346    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736439    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-k8s-certs\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736466    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736490    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-flexvolume-dir\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736521    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-k8s-certs\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736546    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/fa99a5621d016aa714804afcaa1e0a53-etcd-certs\") pod \"etcd-multinode-442000\" (UID: \"fa99a5621d016aa714804afcaa1e0a53\") " pod="kube-system/etcd-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736609    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-ca-certs\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736642    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-kubeconfig\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736675    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736706    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/fa99a5621d016aa714804afcaa1e0a53-etcd-data\") pod \"etcd-multinode-442000\" (UID: \"fa99a5621d016aa714804afcaa1e0a53\") " pod="kube-system/etcd-multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.996146    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="800ms"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.009288    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-442000.17bcb8e6e82683f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-442000", UID:"multinode-442000", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-442000"}, FirstTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), LastTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-442000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.93.236:8443: connect: connection refused'(may retry after sleeping)
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.128790    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:13.769383    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.130034    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:13.769917    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.475229    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.769959    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.475367    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.769994    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.647700    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.647839    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.684558    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c70744e60ac50b50085376d0c124ff15cc884b8a836b0085ef71a65ddb06bcfd"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.767121    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.767283    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.797772    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="1.6s"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.907277    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.907408    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.963548    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.967786    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:03 multinode-442000 kubelet[1523]: I0314 19:41:03.581966    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.875219    1523 kubelet_node_status.go:108] "Node was previously registered" node="multinode-442000"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.875953    1523 kubelet_node_status.go:73] "Successfully registered node" node="multinode-442000"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.881726    1523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.882677    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.894905    1523 setters.go:552] "Node became not ready" node="multinode-442000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-14T19:41:05Z","lastTransitionTime":"2024-03-14T19:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: E0314 19:41:05.973748    1523 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-multinode-442000\" already exists" pod="kube-system/etcd-multinode-442000"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.346543    1523 apiserver.go:52] "Watching apiserver"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355573    1523 topology_manager.go:215] "Topology Admit Handler" podUID="677b9084-0026-4b21-b041-445940624ed7" podNamespace="kube-system" podName="kindnet-7b9lf"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355823    1523 topology_manager.go:215] "Topology Admit Handler" podUID="c7f798bf-6722-4731-af8d-ccd5703d116e" podNamespace="kube-system" podName="kube-proxy-cg28g"
	I0314 19:42:13.770052    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355970    1523 topology_manager.go:215] "Topology Admit Handler" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac" podNamespace="kube-system" podName="coredns-5dd5756b68-d22jc"
	I0314 19:42:13.770580    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.356220    1523 topology_manager.go:215] "Topology Admit Handler" podUID="65d76566-4401-4b28-8452-10ed98624901" podNamespace="kube-system" podName="storage-provisioner"
	I0314 19:42:13.770619    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.356515    1523 topology_manager.go:215] "Topology Admit Handler" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2" podNamespace="default" podName="busybox-5b5d89c9d6-7446n"
	I0314 19:42:13.770691    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.356776    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.770725    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.356948    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.360847    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-442000" podUID="02a2d011-5f4c-451c-9698-a88e42e4b6c9"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.388530    1523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.394882    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419699    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7f798bf-6722-4731-af8d-ccd5703d116e-xtables-lock\") pod \"kube-proxy-cg28g\" (UID: \"c7f798bf-6722-4731-af8d-ccd5703d116e\") " pod="kube-system/kube-proxy-cg28g"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419828    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-cni-cfg\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419854    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-lib-modules\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419895    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/65d76566-4401-4b28-8452-10ed98624901-tmp\") pod \"storage-provisioner\" (UID: \"65d76566-4401-4b28-8452-10ed98624901\") " pod="kube-system/storage-provisioner"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419943    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-xtables-lock\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.420062    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7f798bf-6722-4731-af8d-ccd5703d116e-lib-modules\") pod \"kube-proxy-cg28g\" (UID: \"c7f798bf-6722-4731-af8d-ccd5703d116e\") " pod="kube-system/kube-proxy-cg28g"
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.420370    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.420509    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:06.920467401 +0000 UTC m=+6.742091622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447169    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.770762    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447481    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771292    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447769    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:06.9477485 +0000 UTC m=+6.769372721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771292    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.496544    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81fdcd9740169a0b72b7c7316eeac39f" path="/var/lib/kubelet/pods/81fdcd9740169a0b72b7c7316eeac39f/volumes"
	I0314 19:42:13.771292    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.497856    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="92e70beb375f9f247f5f8395dc065033" path="/var/lib/kubelet/pods/92e70beb375f9f247f5f8395dc065033/volumes"
	I0314 19:42:13.771373    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.840791    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-442000" podUID="8974ad44-5d36-48f0-bc6b-9115bab5fb5e"
	I0314 19:42:13.771373    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.864488    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-442000" podStartSLOduration=0.864428449 podCreationTimestamp="2024-03-14 19:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:41:06.656175631 +0000 UTC m=+6.477799952" watchObservedRunningTime="2024-03-14 19:41:06.864428449 +0000 UTC m=+6.686052670"
	I0314 19:42:13.771443    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.889820    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-442000"
	I0314 19:42:13.771443    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.925613    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.771514    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.925789    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:07.925744766 +0000 UTC m=+7.747368987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.771514    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026456    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771584    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026485    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771584    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026583    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:08.02656612 +0000 UTC m=+7.848190341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771655    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.479340    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.771728    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.479540    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.771728    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.934416    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.771728    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.934566    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:09.934544359 +0000 UTC m=+9.756168580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.771818    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035285    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771818    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035328    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771818    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035382    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:10.035364414 +0000 UTC m=+9.856988635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.771919    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: I0314 19:41:08.192454    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-442000" podUID="8974ad44-5d36-48f0-bc6b-9115bab5fb5e"
	I0314 19:42:13.771919    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: I0314 19:41:08.232807    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-442000" podStartSLOduration=2.232765597 podCreationTimestamp="2024-03-14 19:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:41:08.211688076 +0000 UTC m=+8.033312297" watchObservedRunningTime="2024-03-14 19:41:08.232765597 +0000 UTC m=+8.054389818"
	I0314 19:42:13.772000    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.480285    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772073    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.480350    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772073    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.954598    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.772141    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.954683    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:13.95466674 +0000 UTC m=+13.776290961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.772141    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055917    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772141    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055948    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772265    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055999    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:14.055983733 +0000 UTC m=+13.877608054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772265    8428 command_runner.go:130] > Mar 14 19:41:11 multinode-442000 kubelet[1523]: E0314 19:41:11.480167    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772338    8428 command_runner.go:130] > Mar 14 19:41:11 multinode-442000 kubelet[1523]: E0314 19:41:11.480285    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772415    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.480095    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772441    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.480797    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772476    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.988392    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.772546    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.988528    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:21.98850961 +0000 UTC m=+21.810133831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.772593    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089208    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772627    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089365    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772691    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089427    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:22.089409571 +0000 UTC m=+21.911033792 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772739    8428 command_runner.go:130] > Mar 14 19:41:15 multinode-442000 kubelet[1523]: E0314 19:41:15.480116    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772779    8428 command_runner.go:130] > Mar 14 19:41:15 multinode-442000 kubelet[1523]: E0314 19:41:15.480286    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772779    8428 command_runner.go:130] > Mar 14 19:41:17 multinode-442000 kubelet[1523]: E0314 19:41:17.479583    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772863    8428 command_runner.go:130] > Mar 14 19:41:17 multinode-442000 kubelet[1523]: E0314 19:41:17.480025    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772863    8428 command_runner.go:130] > Mar 14 19:41:19 multinode-442000 kubelet[1523]: E0314 19:41:19.480562    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772943    8428 command_runner.go:130] > Mar 14 19:41:19 multinode-442000 kubelet[1523]: E0314 19:41:19.480625    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:21 multinode-442000 kubelet[1523]: E0314 19:41:21.479895    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:21 multinode-442000 kubelet[1523]: E0314 19:41:21.480437    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.061436    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.061515    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:38.061499618 +0000 UTC m=+37.883123839 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162555    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162603    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162667    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:38.162650651 +0000 UTC m=+37.984274872 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:23 multinode-442000 kubelet[1523]: E0314 19:41:23.480157    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:23 multinode-442000 kubelet[1523]: E0314 19:41:23.481151    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:25 multinode-442000 kubelet[1523]: E0314 19:41:25.479970    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:25 multinode-442000 kubelet[1523]: E0314 19:41:25.480065    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:27 multinode-442000 kubelet[1523]: E0314 19:41:27.480032    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.772970    8428 command_runner.go:130] > Mar 14 19:41:27 multinode-442000 kubelet[1523]: E0314 19:41:27.480122    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.773497    8428 command_runner.go:130] > Mar 14 19:41:29 multinode-442000 kubelet[1523]: E0314 19:41:29.480034    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.773497    8428 command_runner.go:130] > Mar 14 19:41:29 multinode-442000 kubelet[1523]: E0314 19:41:29.480291    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.773588    8428 command_runner.go:130] > Mar 14 19:41:31 multinode-442000 kubelet[1523]: E0314 19:41:31.479554    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.773588    8428 command_runner.go:130] > Mar 14 19:41:31 multinode-442000 kubelet[1523]: E0314 19:41:31.479650    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.773662    8428 command_runner.go:130] > Mar 14 19:41:33 multinode-442000 kubelet[1523]: E0314 19:41:33.479299    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.773662    8428 command_runner.go:130] > Mar 14 19:41:33 multinode-442000 kubelet[1523]: E0314 19:41:33.479835    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.773735    8428 command_runner.go:130] > Mar 14 19:41:35 multinode-442000 kubelet[1523]: E0314 19:41:35.479778    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.773735    8428 command_runner.go:130] > Mar 14 19:41:35 multinode-442000 kubelet[1523]: E0314 19:41:35.480230    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.773808    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 kubelet[1523]: E0314 19:41:37.480388    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.773808    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 kubelet[1523]: E0314 19:41:37.480921    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.773808    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.089907    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:13.773907    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.090056    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:42:10.090036325 +0000 UTC m=+69.911660546 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:13.773907    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191172    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.773984    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191351    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191425    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:42:10.191406835 +0000 UTC m=+70.013031056 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: I0314 19:41:38.578418    1523 scope.go:117] "RemoveContainer" containerID="07c2872c48edaa090b20d66267963c0d69c5c9eb97824b199af2d7e611ac596a"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: I0314 19:41:38.578814    1523 scope.go:117] "RemoveContainer" containerID="2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.579025    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(65d76566-4401-4b28-8452-10ed98624901)\"" pod="kube-system/storage-provisioner" podUID="65d76566-4401-4b28-8452-10ed98624901"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:39 multinode-442000 kubelet[1523]: E0314 19:41:39.479691    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:39 multinode-442000 kubelet[1523]: E0314 19:41:39.479909    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: E0314 19:41:41.479574    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: E0314 19:41:41.480003    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: I0314 19:41:41.518811    1523 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 kubelet[1523]: I0314 19:41:53.480206    1523 scope.go:117] "RemoveContainer" containerID="2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: I0314 19:42:00.447192    1523 scope.go:117] "RemoveContainer" containerID="9585e3eb2ead2f471eb0d22c8e29e4bfd954095774af365d80329ea39fff78e1"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: I0314 19:42:00.490865    1523 scope.go:117] "RemoveContainer" containerID="cd640f130e429bd4182c258358ec791604b8f307f9c45f2e3880e9b1a7df666a"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: E0314 19:42:00.516969    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.167906    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f"
	I0314 19:42:13.774183    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.214897    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439"
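	(Editor's note: the mount retries in the kubelet excerpt above back off with a doubling delay; the nestedpendingoperations.go lines step through durationBeforeRetry values of 500ms, 1s, 2s, 4s, 8s, 16s and finally 32s for the same coredns config-volume and busybox kube-api-access-6hh9s volumes. A minimal Go sketch of that schedule, illustrative only and not kubelet's actual code; the 32s cap is simply the last interval observed above:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Reproduce the doubling durationBeforeRetry sequence seen in the
		// nestedpendingoperations.go log lines: 500ms, 1s, 2s, ..., 32s.
		delay := 500 * time.Millisecond
		maxDelay := 32 * time.Second // assumed cap: last retry interval observed in the log
		for {
			fmt.Printf("durationBeforeRetry %v\n", delay)
			if delay >= maxDelay {
				break
			}
			delay *= 2
		}
	}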
	I0314 19:42:13.815729    8428 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:42:13.815729    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:42:14.033247    8428 command_runner.go:130] > Name:               multinode-442000
	I0314 19:42:14.033324    8428 command_runner.go:130] > Roles:              control-plane
	I0314 19:42:14.033324    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:14.033324    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:14.033324    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:14.033324    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000
	I0314 19:42:14.033458    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:14.033510    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:14.033568    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:14.033568    8428 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0314 19:42:14.033640    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_19_05_0700
	I0314 19:42:14.033681    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:14.033725    8428 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0314 19:42:14.033783    8428 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0314 19:42:14.033783    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:14.033844    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:14.033844    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:14.033908    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:19:00 +0000
	I0314 19:42:14.033908    8428 command_runner.go:130] > Taints:             <none>
	I0314 19:42:14.033969    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:14.033969    8428 command_runner.go:130] > Lease:
	I0314 19:42:14.034027    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000
	I0314 19:42:14.034027    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:14.034088    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:42:07 +0000
	I0314 19:42:14.034088    8428 command_runner.go:130] > Conditions:
	I0314 19:42:14.034187    8428 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0314 19:42:14.034187    8428 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0314 19:42:14.034255    8428 command_runner.go:130] >   MemoryPressure   False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0314 19:42:14.034300    8428 command_runner.go:130] >   DiskPressure     False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0314 19:42:14.034393    8428 command_runner.go:130] >   PIDPressure      False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0314 19:42:14.034445    8428 command_runner.go:130] >   Ready            True    Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:41:41 +0000   KubeletReady                 kubelet is posting ready status
	I0314 19:42:14.034484    8428 command_runner.go:130] > Addresses:
	I0314 19:42:14.034539    8428 command_runner.go:130] >   InternalIP:  172.17.93.236
	I0314 19:42:14.034539    8428 command_runner.go:130] >   Hostname:    multinode-442000
	I0314 19:42:14.034580    8428 command_runner.go:130] > Capacity:
	I0314 19:42:14.034580    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:14.034580    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:14.034622    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:14.034622    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:14.034653    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:14.034653    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:14.034699    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:14.034699    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:14.034732    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:14.034732    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:14.034732    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:14.034789    8428 command_runner.go:130] > System Info:
	I0314 19:42:14.034840    8428 command_runner.go:130] >   Machine ID:                 37c811f81f1d4d709fd4a6eb79d70749
	I0314 19:42:14.034840    8428 command_runner.go:130] >   System UUID:                8469b663-ea90-da4f-856d-11034a8f65d8
	I0314 19:42:14.034890    8428 command_runner.go:130] >   Boot ID:                    91589624-f8f3-469e-b556-aa6dd64e54de
	I0314 19:42:14.034932    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:14.034969    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:14.035003    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:14.035071    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:14.035071    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:14.035133    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:14.035156    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:14.035185    8428 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0314 19:42:14.035246    8428 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0314 19:42:14.035352    8428 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0314 19:42:14.035393    8428 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:14.035433    8428 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0314 19:42:14.035468    8428 command_runner.go:130] >   default                     busybox-5b5d89c9d6-7446n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0314 19:42:14.035468    8428 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-d22jc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	I0314 19:42:14.035528    8428 command_runner.go:130] >   kube-system                 etcd-multinode-442000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0314 19:42:14.035528    8428 command_runner.go:130] >   kube-system                 kindnet-7b9lf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0314 19:42:14.035606    8428 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-442000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0314 19:42:14.035632    8428 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-442000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:14.035632    8428 command_runner.go:130] >   kube-system                 kube-proxy-cg28g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0314 19:42:14.035632    8428 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-442000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:14.035632    8428 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0314 19:42:14.035632    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:14.035632    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Resource           Requests     Limits
	I0314 19:42:14.035632    8428 command_runner.go:130] >   --------           --------     ------
	I0314 19:42:14.035632    8428 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0314 19:42:14.035632    8428 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0314 19:42:14.035632    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0314 19:42:14.035632    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0314 19:42:14.035632    8428 command_runner.go:130] > Events:
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0314 19:42:14.035632    8428 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:14.035632    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:14.036181    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:14.036247    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:14.036301    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:14.036360    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:14.036415    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:14.036415    8428 command_runner.go:130] >   Normal  Starting                 23m                kubelet          Starting kubelet.
	I0314 19:42:14.036471    8428 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	I0314 19:42:14.036532    8428 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-442000 status is now: NodeReady
	I0314 19:42:14.036532    8428 command_runner.go:130] >   Normal  Starting                 74s                kubelet          Starting kubelet.
	I0314 19:42:14.036532    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:14.036606    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:14.036659    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:14.036720    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:14.036771    8428 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	I0314 19:42:14.036830    8428 command_runner.go:130] > Name:               multinode-442000-m02
	I0314 19:42:14.036830    8428 command_runner.go:130] > Roles:              <none>
	I0314 19:42:14.036830    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:14.036886    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:14.036886    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:14.036948    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000-m02
	I0314 19:42:14.036948    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:14.037006    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:14.037066    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:14.037066    8428 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0314 19:42:14.037121    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_22_02_0700
	I0314 19:42:14.037121    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:14.037184    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:14.037184    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:14.037338    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:14.037381    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:22:02 +0000
	I0314 19:42:14.037422    8428 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0314 19:42:14.037460    8428 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0314 19:42:14.037460    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:14.037541    8428 command_runner.go:130] > Lease:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000-m02
	I0314 19:42:14.037541    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:14.037541    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:38:03 +0000
	I0314 19:42:14.037541    8428 command_runner.go:130] > Conditions:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0314 19:42:14.037541    8428 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0314 19:42:14.037541    8428 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.037541    8428 command_runner.go:130] >   DiskPressure     Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.037541    8428 command_runner.go:130] >   PIDPressure      Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Ready            Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.037541    8428 command_runner.go:130] > Addresses:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   InternalIP:  172.17.80.135
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Hostname:    multinode-442000-m02
	I0314 19:42:14.037541    8428 command_runner.go:130] > Capacity:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:14.037541    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:14.037541    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:14.037541    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:14.037541    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:14.037541    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:14.037541    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:14.037541    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:14.037541    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:14.037541    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:14.037541    8428 command_runner.go:130] > System Info:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Machine ID:                 35b6f7da4d3943d99d8a5913cae1c8fb
	I0314 19:42:14.037541    8428 command_runner.go:130] >   System UUID:                0b9b8376-0767-f940-9973-d373e3dc050d
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Boot ID:                    45d479cc-26e8-46a6-9431-50637071f586
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:14.037541    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:14.037541    8428 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0314 19:42:14.037541    8428 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0314 19:42:14.037541    8428 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:14.037541    8428 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0314 19:42:14.037541    8428 command_runner.go:130] >   default                     busybox-5b5d89c9d6-8drpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0314 19:42:14.037541    8428 command_runner.go:130] >   kube-system                 kindnet-c7m4p               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0314 19:42:14.037541    8428 command_runner.go:130] >   kube-system                 kube-proxy-72dzs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0314 19:42:14.037541    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:14.037541    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:14.037541    8428 command_runner.go:130] >   Resource           Requests   Limits
	I0314 19:42:14.037541    8428 command_runner.go:130] >   --------           --------   ------
	I0314 19:42:14.037541    8428 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0314 19:42:14.037541    8428 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0314 19:42:14.037541    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0314 19:42:14.037541    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0314 19:42:14.038642    8428 command_runner.go:130] > Events:
	I0314 19:42:14.038642    8428 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0314 19:42:14.038642    8428 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientMemory
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasNoDiskPressure
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientPID
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  NodeReady                19m                kubelet          Node multinode-442000-m02 status is now: NodeReady
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	I0314 19:42:14.038765    8428 command_runner.go:130] >   Normal  NodeNotReady             15s                node-controller  Node multinode-442000-m02 status is now: NodeNotReady
	I0314 19:42:14.038765    8428 command_runner.go:130] > Name:               multinode-442000-m03
	I0314 19:42:14.038765    8428 command_runner.go:130] > Roles:              <none>
	I0314 19:42:14.038765    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000-m03
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_36_47_0700
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:14.038765    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:14.038765    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:14.039293    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:14.039293    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:36:47 +0000
	I0314 19:42:14.039293    8428 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0314 19:42:14.039293    8428 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0314 19:42:14.039293    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:14.039293    8428 command_runner.go:130] > Lease:
	I0314 19:42:14.039412    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000-m03
	I0314 19:42:14.039412    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:14.039463    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:37:37 +0000
	I0314 19:42:14.039463    8428 command_runner.go:130] > Conditions:
	I0314 19:42:14.039463    8428 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0314 19:42:14.039463    8428 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0314 19:42:14.039463    8428 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.039463    8428 command_runner.go:130] >   DiskPressure     Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.039463    8428 command_runner.go:130] >   PIDPressure      Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.039463    8428 command_runner.go:130] >   Ready            Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:14.039463    8428 command_runner.go:130] > Addresses:
	I0314 19:42:14.039463    8428 command_runner.go:130] >   InternalIP:  172.17.84.215
	I0314 19:42:14.039463    8428 command_runner.go:130] >   Hostname:    multinode-442000-m03
	I0314 19:42:14.039463    8428 command_runner.go:130] > Capacity:
	I0314 19:42:14.039463    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:14.039463    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:14.039463    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:14.039463    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:14.039463    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:14.039463    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:14.039463    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:14.039463    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:14.039463    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:14.039463    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:14.039463    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:14.039463    8428 command_runner.go:130] > System Info:
	I0314 19:42:14.039463    8428 command_runner.go:130] >   Machine ID:                 dc7772516bfe448db22a5c28796f53ab
	I0314 19:42:14.039463    8428 command_runner.go:130] >   System UUID:                71573585-d564-f043-9154-3d5854ce61b8
	I0314 19:42:14.039463    8428 command_runner.go:130] >   Boot ID:                    fed746b2-110b-43ee-9065-09983ba74a37
	I0314 19:42:14.039995    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:14.039995    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:14.039995    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:14.040079    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:14.040079    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:14.040141    8428 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0314 19:42:14.040141    8428 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0314 19:42:14.040141    8428 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:14.040141    8428 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0314 19:42:14.040141    8428 command_runner.go:130] >   kube-system                 kindnet-r7zdb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	I0314 19:42:14.040141    8428 command_runner.go:130] >   kube-system                 kube-proxy-w2qls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	I0314 19:42:14.040141    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:14.040141    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Resource           Requests   Limits
	I0314 19:42:14.040141    8428 command_runner.go:130] >   --------           --------   ------
	I0314 19:42:14.040141    8428 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0314 19:42:14.040141    8428 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0314 19:42:14.040141    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0314 19:42:14.040141    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0314 19:42:14.040141    8428 command_runner.go:130] > Events:
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0314 19:42:14.040141    8428 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Normal  Starting                 5m25s                  kube-proxy       
	I0314 19:42:14.040141    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	I0314 19:42:14.040667    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	I0314 19:42:14.040667    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	I0314 19:42:14.040748    8428 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-442000-m03 status is now: NodeReady
	I0314 19:42:14.040826    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m27s (x5 over 5m29s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	I0314 19:42:14.040826    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m27s (x5 over 5m29s)  kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	I0314 19:42:14.040917    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m27s (x5 over 5m29s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	I0314 19:42:14.040917    8428 command_runner.go:130] >   Normal  RegisteredNode           5m23s                  node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
	I0314 19:42:14.040917    8428 command_runner.go:130] >   Normal  NodeReady                5m20s                  kubelet          Node multinode-442000-m03 status is now: NodeReady
	I0314 19:42:14.041025    8428 command_runner.go:130] >   Normal  NodeNotReady             3m53s                  node-controller  Node multinode-442000-m03 status is now: NodeNotReady
	I0314 19:42:14.041025    8428 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
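
Note on the resource tables above: the percentage columns are fragile in this capture, because the log relay passes each kubectl line through a printf-style formatter, so a bare '%' in the data is mis-parsed as a verb and rendered as "%!)(MISSING)". A minimal Go sketch of that failure mode (the strings are illustrative, not the harness's actual code):

    package main

    import "fmt"

    func main() {
        // A line as printed by `kubectl describe node` (illustrative values):
        line := "cpu                850m (42%)   100m (5%)"

        // Relaying captured text as the FORMAT string mangles bare '%':
        // '%' followed by ')' is an unknown verb with no argument, so Go
        // renders it as "%!)(MISSING)".
        fmt.Printf(line + "\n") // prints: cpu                850m (42%!)(MISSING)   100m (5%!)(MISSING)

        // Passing the text as an argument keeps it intact:
        fmt.Printf("%s\n", line) // prints: cpu                850m (42%)   100m (5%)
    }
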
	I0314 19:42:14.051049    8428 logs.go:123] Gathering logs for kube-scheduler [dbb603289bf1] ...
	I0314 19:42:14.051049    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb603289bf1"
	I0314 19:42:14.082882    8428 command_runner.go:130] ! I0314 19:18:59.007917       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:14.082882    8428 command_runner.go:130] ! W0314 19:19:00.211611       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0314 19:42:14.082882    8428 command_runner.go:130] ! W0314 19:19:00.212802       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:14.083479    8428 command_runner.go:130] ! W0314 19:19:00.212990       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0314 19:42:14.083479    8428 command_runner.go:130] ! W0314 19:19:00.213108       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:42:14.083644    8428 command_runner.go:130] ! I0314 19:19:00.283055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:42:14.083644    8428 command_runner.go:130] ! I0314 19:19:00.284207       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.083644    8428 command_runner.go:130] ! I0314 19:19:00.288027       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:42:14.083743    8428 command_runner.go:130] ! I0314 19:19:00.288233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:14.083743    8428 command_runner.go:130] ! I0314 19:19:00.288206       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:42:14.083743    8428 command_runner.go:130] ! I0314 19:19:00.290233       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:14.083743    8428 command_runner.go:130] ! W0314 19:19:00.293166       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:14.083743    8428 command_runner.go:130] ! E0314 19:19:00.293367       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:14.083863    8428 command_runner.go:130] ! W0314 19:19:00.311723       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:14.083863    8428 command_runner.go:130] ! E0314 19:19:00.311803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:14.083863    8428 command_runner.go:130] ! W0314 19:19:00.312480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.083863    8428 command_runner.go:130] ! E0314 19:19:00.317665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.083974    8428 command_runner.go:130] ! W0314 19:19:00.313212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:14.083974    8428 command_runner.go:130] ! W0314 19:19:00.313379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:14.084069    8428 command_runner.go:130] ! W0314 19:19:00.313450       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:14.084069    8428 command_runner.go:130] ! W0314 19:19:00.313586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084069    8428 command_runner.go:130] ! W0314 19:19:00.313632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084161    8428 command_runner.go:130] ! W0314 19:19:00.313705       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:14.084161    8428 command_runner.go:130] ! W0314 19:19:00.313774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:14.084161    8428 command_runner.go:130] ! W0314 19:19:00.313864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:14.084161    8428 command_runner.go:130] ! W0314 19:19:00.313910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:14.084250    8428 command_runner.go:130] ! W0314 19:19:00.313978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:14.084250    8428 command_runner.go:130] ! W0314 19:19:00.314056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084250    8428 command_runner.go:130] ! W0314 19:19:00.314091       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:14.084340    8428 command_runner.go:130] ! E0314 19:19:00.318101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:14.084340    8428 command_runner.go:130] ! E0314 19:19:00.318394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:14.084340    8428 command_runner.go:130] ! E0314 19:19:00.318606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:14.084429    8428 command_runner.go:130] ! E0314 19:19:00.318728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084429    8428 command_runner.go:130] ! E0314 19:19:00.318953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084429    8428 command_runner.go:130] ! E0314 19:19:00.319076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:14.084519    8428 command_runner.go:130] ! E0314 19:19:00.319318       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:14.084519    8428 command_runner.go:130] ! E0314 19:19:00.319575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:14.084519    8428 command_runner.go:130] ! E0314 19:19:00.319588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:14.084631    8428 command_runner.go:130] ! E0314 19:19:00.319719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:14.084631    8428 command_runner.go:130] ! E0314 19:19:00.319732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084631    8428 command_runner.go:130] ! E0314 19:19:00.319788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:14.084631    8428 command_runner.go:130] ! W0314 19:19:01.268901       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:14.084729    8428 command_runner.go:130] ! E0314 19:19:01.269219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:14.084729    8428 command_runner.go:130] ! W0314 19:19:01.309661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084729    8428 command_runner.go:130] ! E0314 19:19:01.309894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084729    8428 command_runner.go:130] ! W0314 19:19:01.318104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084834    8428 command_runner.go:130] ! E0314 19:19:01.318410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.084834    8428 command_runner.go:130] ! W0314 19:19:01.382148       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:14.084834    8428 command_runner.go:130] ! E0314 19:19:01.382194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:14.084941    8428 command_runner.go:130] ! W0314 19:19:01.454259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:14.084941    8428 command_runner.go:130] ! E0314 19:19:01.454398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:14.085024    8428 command_runner.go:130] ! W0314 19:19:01.505982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:14.085024    8428 command_runner.go:130] ! E0314 19:19:01.506182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.640521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.640836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.681052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.681953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.732243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.732288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.767241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.767329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.783665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.783845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.812936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.813027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.821109       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.821267       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.843311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.843339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! W0314 19:19:01.914649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! E0314 19:19:01.914986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:14.085078    8428 command_runner.go:130] ! I0314 19:19:04.090863       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:14.085078    8428 command_runner.go:130] ! I0314 19:38:43.236637       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0314 19:42:14.085620    8428 command_runner.go:130] ! I0314 19:38:43.237145       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0314 19:42:14.085620    8428 command_runner.go:130] ! E0314 19:38:43.237439       1 run.go:74] "command failed" err="finished without leader elect"
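
The burst of "forbidden" reflector warnings above is the usual scheduler startup race: the informers begin listing before the apiserver has reconciled the system:kube-scheduler RBAC bindings, and the warnings stop once the caches sync (19:19:04). The closing "finished without leader elect" at 19:38:43 appears to be the scheduler's normal exit path when its leader-election loop ends (the host was being stopped just before), not a scheduling failure. A minimal sketch of the client-go leader-election pattern behind that message (lease name, identity, and durations are illustrative, not the scheduler's actual wiring):

    package main

    import (
        "context"
        "log"
        "time"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // A Lease object in kube-system serves as the lock, as kube-scheduler's does.
        lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
            "kube-system", "demo-scheduler", // hypothetical lease name
            client.CoreV1(), client.CoordinationV1(),
            resourcelock.ResourceLockConfig{Identity: "demo-holder"})
        if err != nil {
            log.Fatal(err)
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    // Real components run their controllers here until ctx is done.
                    <-ctx.Done()
                },
                OnStoppedLeading: func() {
                    // Losing or releasing the lease lands here; the scheduler's
                    // equivalent is the "finished without leader elect" exit.
                    log.Fatal("lost leader lease")
                },
            },
        })
    }
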
	I0314 19:42:14.096261    8428 logs.go:123] Gathering logs for kube-controller-manager [16b80f73683d] ...
	I0314 19:42:14.096291    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b80f73683d"
	I0314 19:42:14.132585    8428 command_runner.go:130] ! I0314 19:18:57.791996       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:14.133034    8428 command_runner.go:130] ! I0314 19:18:58.802083       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 19:42:14.133107    8428 command_runner.go:130] ! I0314 19:18:58.802123       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.133107    8428 command_runner.go:130] ! I0314 19:18:58.803952       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:14.133241    8428 command_runner.go:130] ! I0314 19:18:58.804068       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:14.133241    8428 command_runner.go:130] ! I0314 19:18:58.807259       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:42:14.133412    8428 command_runner.go:130] ! I0314 19:18:58.807321       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:14.133412    8428 command_runner.go:130] ! I0314 19:19:03.211766       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0314 19:42:14.133575    8428 command_runner.go:130] ! I0314 19:19:03.241058       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0314 19:42:14.133655    8428 command_runner.go:130] ! I0314 19:19:03.241394       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 19:42:14.133731    8428 command_runner.go:130] ! I0314 19:19:03.241421       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0314 19:42:14.133731    8428 command_runner.go:130] ! I0314 19:19:03.277645       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0314 19:42:14.133812    8428 command_runner.go:130] ! I0314 19:19:03.277842       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:42:14.133987    8428 command_runner.go:130] ! I0314 19:19:03.277987       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0314 19:42:14.134092    8428 command_runner.go:130] ! I0314 19:19:03.278099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0314 19:42:14.134092    8428 command_runner.go:130] ! I0314 19:19:03.278176       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0314 19:42:14.134181    8428 command_runner.go:130] ! I0314 19:19:03.278283       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0314 19:42:14.134261    8428 command_runner.go:130] ! I0314 19:19:03.278389       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0314 19:42:14.134336    8428 command_runner.go:130] ! I0314 19:19:03.278566       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0314 19:42:14.134418    8428 command_runner.go:130] ! W0314 19:19:03.278710       1 shared_informer.go:593] resyncPeriod 13h23m0.648968128s is smaller than resyncCheckPeriod 15h46m21.421594093s and the informer has already started. Changing it to 15h46m21.421594093s
	I0314 19:42:14.134418    8428 command_runner.go:130] ! I0314 19:19:03.278915       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0314 19:42:14.134506    8428 command_runner.go:130] ! I0314 19:19:03.279052       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0314 19:42:14.134585    8428 command_runner.go:130] ! I0314 19:19:03.279196       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0314 19:42:14.134585    8428 command_runner.go:130] ! I0314 19:19:03.279291       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0314 19:42:14.134834    8428 command_runner.go:130] ! I0314 19:19:03.279313       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0314 19:42:14.134834    8428 command_runner.go:130] ! I0314 19:19:03.279560       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0314 19:42:14.134915    8428 command_runner.go:130] ! I0314 19:19:03.279688       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0314 19:42:14.134991    8428 command_runner.go:130] ! I0314 19:19:03.279834       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0314 19:42:14.135068    8428 command_runner.go:130] ! I0314 19:19:03.279857       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0314 19:42:14.135124    8428 command_runner.go:130] ! I0314 19:19:03.279927       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0314 19:42:14.135124    8428 command_runner.go:130] ! I0314 19:19:03.280011       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0314 19:42:14.135185    8428 command_runner.go:130] ! I0314 19:19:03.280106       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0314 19:42:14.135185    8428 command_runner.go:130] ! I0314 19:19:03.280148       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:42:14.135249    8428 command_runner.go:130] ! I0314 19:19:03.280224       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:42:14.135249    8428 command_runner.go:130] ! I0314 19:19:03.280306       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:14.135309    8428 command_runner.go:130] ! I0314 19:19:03.280392       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:42:14.135309    8428 command_runner.go:130] ! I0314 19:19:03.297527       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0314 19:42:14.135365    8428 command_runner.go:130] ! I0314 19:19:03.297675       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0314 19:42:14.135424    8428 command_runner.go:130] ! I0314 19:19:03.297706       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0314 19:42:14.135424    8428 command_runner.go:130] ! I0314 19:19:03.310691       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0314 19:42:14.135480    8428 command_runner.go:130] ! I0314 19:19:03.310864       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0314 19:42:14.135541    8428 command_runner.go:130] ! I0314 19:19:03.311121       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0314 19:42:14.135596    8428 command_runner.go:130] ! I0314 19:19:03.311163       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0314 19:42:14.135596    8428 command_runner.go:130] ! I0314 19:19:03.311170       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0314 19:42:14.135596    8428 command_runner.go:130] ! I0314 19:19:03.312491       1 shared_informer.go:318] Caches are synced for tokens
	I0314 19:42:14.135656    8428 command_runner.go:130] ! I0314 19:19:03.324271       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0314 19:42:14.135717    8428 command_runner.go:130] ! I0314 19:19:03.324640       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0314 19:42:14.135717    8428 command_runner.go:130] ! I0314 19:19:03.324856       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0314 19:42:14.135778    8428 command_runner.go:130] ! I0314 19:19:03.341489       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:42:14.135778    8428 command_runner.go:130] ! I0314 19:19:03.341829       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:42:14.135833    8428 command_runner.go:130] ! I0314 19:19:03.359979       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:42:14.135892    8428 command_runner.go:130] ! I0314 19:19:03.360131       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:42:14.135947    8428 command_runner.go:130] ! I0314 19:19:03.373006       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:42:14.135947    8428 command_runner.go:130] ! I0314 19:19:03.373343       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:42:14.136006    8428 command_runner.go:130] ! I0314 19:19:03.373606       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:42:14.136062    8428 command_runner.go:130] ! I0314 19:19:03.385026       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0314 19:42:14.136062    8428 command_runner.go:130] ! I0314 19:19:03.385081       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0314 19:42:14.136121    8428 command_runner.go:130] ! I0314 19:19:03.385807       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0314 19:42:14.136180    8428 command_runner.go:130] ! I0314 19:19:03.399556       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0314 19:42:14.136180    8428 command_runner.go:130] ! I0314 19:19:03.399796       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0314 19:42:14.136240    8428 command_runner.go:130] ! I0314 19:19:03.399936       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0314 19:42:14.136295    8428 command_runner.go:130] ! I0314 19:19:03.400078       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:42:14.136354    8428 command_runner.go:130] ! I0314 19:19:03.400349       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:42:14.136354    8428 command_runner.go:130] ! I0314 19:19:03.400489       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:42:14.136411    8428 command_runner.go:130] ! I0314 19:19:03.521977       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0314 19:42:14.136411    8428 command_runner.go:130] ! I0314 19:19:03.522076       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0314 19:42:14.136471    8428 command_runner.go:130] ! I0314 19:19:03.522086       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0314 19:42:14.136471    8428 command_runner.go:130] ! I0314 19:19:03.567446       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0314 19:42:14.136528    8428 command_runner.go:130] ! I0314 19:19:03.567574       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0314 19:42:14.136590    8428 command_runner.go:130] ! I0314 19:19:03.567615       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.136590    8428 command_runner.go:130] ! I0314 19:19:03.568792       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0314 19:42:14.136706    8428 command_runner.go:130] ! I0314 19:19:03.568891       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0314 19:42:14.136769    8428 command_runner.go:130] ! I0314 19:19:03.569119       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.136820    8428 command_runner.go:130] ! I0314 19:19:03.570147       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0314 19:42:14.136820    8428 command_runner.go:130] ! I0314 19:19:03.570261       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:14.136875    8428 command_runner.go:130] ! I0314 19:19:03.570356       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.136937    8428 command_runner.go:130] ! I0314 19:19:03.571403       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0314 19:42:14.136998    8428 command_runner.go:130] ! I0314 19:19:03.571529       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.137042    8428 command_runner.go:130] ! I0314 19:19:03.571434       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0314 19:42:14.137100    8428 command_runner.go:130] ! I0314 19:19:03.572095       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:42:14.137100    8428 command_runner.go:130] ! I0314 19:19:03.723142       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0314 19:42:14.137160    8428 command_runner.go:130] ! I0314 19:19:03.723289       1 ttl_controller.go:124] "Starting TTL controller"
	I0314 19:42:14.137160    8428 command_runner.go:130] ! I0314 19:19:03.723300       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0314 19:42:14.137216    8428 command_runner.go:130] ! I0314 19:19:13.784656       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0314 19:42:14.137276    8428 command_runner.go:130] ! I0314 19:19:13.784710       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0314 19:42:14.137276    8428 command_runner.go:130] ! I0314 19:19:13.784891       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0314 19:42:14.137333    8428 command_runner.go:130] ! I0314 19:19:13.784975       1 shared_informer.go:311] Waiting for caches to sync for node
	I0314 19:42:14.137333    8428 command_runner.go:130] ! I0314 19:19:13.813537       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:42:14.137393    8428 command_runner.go:130] ! I0314 19:19:13.814099       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:42:14.137393    8428 command_runner.go:130] ! I0314 19:19:13.814528       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:42:14.137450    8428 command_runner.go:130] ! I0314 19:19:13.831516       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0314 19:42:14.137512    8428 command_runner.go:130] ! I0314 19:19:13.831928       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0314 19:42:14.137569    8428 command_runner.go:130] ! I0314 19:19:13.832023       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:14.137569    8428 command_runner.go:130] ! I0314 19:19:13.832052       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0314 19:42:14.137631    8428 command_runner.go:130] ! I0314 19:19:13.876141       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0314 19:42:14.137631    8428 command_runner.go:130] ! I0314 19:19:13.876437       1 horizontal.go:200] "Starting HPA controller"
	I0314 19:42:14.137690    8428 command_runner.go:130] ! I0314 19:19:13.876448       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0314 19:42:14.137690    8428 command_runner.go:130] ! I0314 19:19:13.892498       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0314 19:42:14.137751    8428 command_runner.go:130] ! I0314 19:19:13.892891       1 disruption.go:433] "Sending events to api server."
	I0314 19:42:14.137751    8428 command_runner.go:130] ! I0314 19:19:13.893092       1 disruption.go:444] "Starting disruption controller"
	I0314 19:42:14.137751    8428 command_runner.go:130] ! I0314 19:19:13.893185       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:42:14.137809    8428 command_runner.go:130] ! I0314 19:19:13.895299       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0314 19:42:14.137860    8428 command_runner.go:130] ! I0314 19:19:13.895861       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0314 19:42:14.137860    8428 command_runner.go:130] ! I0314 19:19:13.896105       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0314 19:42:14.137898    8428 command_runner.go:130] ! I0314 19:19:13.908480       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:42:14.137944    8428 command_runner.go:130] ! I0314 19:19:13.908861       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:42:14.138009    8428 command_runner.go:130] ! I0314 19:19:13.908873       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:42:14.138009    8428 command_runner.go:130] ! I0314 19:19:13.929369       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0314 19:42:14.138070    8428 command_runner.go:130] ! I0314 19:19:13.929803       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0314 19:42:14.138126    8428 command_runner.go:130] ! I0314 19:19:13.930050       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0314 19:42:14.138126    8428 command_runner.go:130] ! I0314 19:19:13.974683       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:42:14.138187    8428 command_runner.go:130] ! I0314 19:19:13.974899       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:42:14.138187    8428 command_runner.go:130] ! I0314 19:19:13.975108       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:42:14.138245    8428 command_runner.go:130] ! E0314 19:19:14.134866       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0314 19:42:14.138245    8428 command_runner.go:130] ! I0314 19:19:14.135266       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0314 19:42:14.138307    8428 command_runner.go:130] ! E0314 19:19:14.170400       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 19:42:14.138362    8428 command_runner.go:130] ! I0314 19:19:14.170426       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 19:42:14.138421    8428 command_runner.go:130] ! I0314 19:19:14.324676       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0314 19:42:14.138421    8428 command_runner.go:130] ! I0314 19:19:14.324865       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 19:42:14.138478    8428 command_runner.go:130] ! I0314 19:19:14.325169       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 19:42:14.138537    8428 command_runner.go:130] ! I0314 19:19:14.474401       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0314 19:42:14.138576    8428 command_runner.go:130] ! I0314 19:19:14.474562       1 controller.go:169] "Starting ephemeral volume controller"
	I0314 19:42:14.138576    8428 command_runner.go:130] ! I0314 19:19:14.474660       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0314 19:42:14.138576    8428 command_runner.go:130] ! I0314 19:19:14.633668       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0314 19:42:14.138667    8428 command_runner.go:130] ! I0314 19:19:14.633821       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0314 19:42:14.138667    8428 command_runner.go:130] ! I0314 19:19:14.633832       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0314 19:42:14.138773    8428 command_runner.go:130] ! I0314 19:19:14.773955       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0314 19:42:14.138773    8428 command_runner.go:130] ! I0314 19:19:14.774019       1 gc_controller.go:101] "Starting GC controller"
	I0314 19:42:14.138773    8428 command_runner.go:130] ! I0314 19:19:14.774027       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 19:42:14.138773    8428 command_runner.go:130] ! I0314 19:19:14.925568       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0314 19:42:14.138872    8428 command_runner.go:130] ! I0314 19:19:14.925814       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 19:42:14.138872    8428 command_runner.go:130] ! I0314 19:19:14.925828       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 19:42:14.138872    8428 command_runner.go:130] ! I0314 19:19:15.075328       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0314 19:42:14.138872    8428 command_runner.go:130] ! I0314 19:19:15.075556       1 job_controller.go:226] "Starting job controller"
	I0314 19:42:14.138872    8428 command_runner.go:130] ! I0314 19:19:15.075634       1 shared_informer.go:311] Waiting for caches to sync for job
	I0314 19:42:14.138981    8428 command_runner.go:130] ! I0314 19:19:15.225929       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0314 19:42:14.138981    8428 command_runner.go:130] ! I0314 19:19:15.226065       1 expand_controller.go:328] "Starting expand controller"
	I0314 19:42:14.138981    8428 command_runner.go:130] ! I0314 19:19:15.226077       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0314 19:42:14.139089    8428 command_runner.go:130] ! I0314 19:19:15.378471       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:42:14.139089    8428 command_runner.go:130] ! I0314 19:19:15.378640       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:42:14.139089    8428 command_runner.go:130] ! I0314 19:19:15.379237       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:42:14.139089    8428 command_runner.go:130] ! I0314 19:19:15.525089       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:42:14.139195    8428 command_runner.go:130] ! I0314 19:19:15.525565       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:42:14.139195    8428 command_runner.go:130] ! I0314 19:19:15.525643       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:42:14.139304    8428 command_runner.go:130] ! I0314 19:19:15.679545       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:42:14.139304    8428 command_runner.go:130] ! I0314 19:19:15.679611       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:42:14.139404    8428 command_runner.go:130] ! I0314 19:19:15.679619       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:42:14.139404    8428 command_runner.go:130] ! I0314 19:19:15.825516       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:42:14.139404    8428 command_runner.go:130] ! I0314 19:19:15.825908       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:42:14.139506    8428 command_runner.go:130] ! I0314 19:19:15.825920       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:42:14.139554    8428 command_runner.go:130] ! I0314 19:19:15.976308       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0314 19:42:14.139635    8428 command_runner.go:130] ! I0314 19:19:15.976673       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:15.976858       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:15.993409       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.017841       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000\" does not exist"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.022817       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.023332       1 shared_informer.go:318] Caches are synced for TTL
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.025413       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.025667       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.025909       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.026194       1 shared_informer.go:318] Caches are synced for expand
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.030689       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.042937       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.063170       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.069816       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.069953       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.071382       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.072881       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.075260       1 shared_informer.go:318] Caches are synced for GC
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.075273       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.075312       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.076852       1 shared_informer.go:318] Caches are synced for HPA
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.077008       1 shared_informer.go:318] Caches are synced for crt configmap
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.077022       1 shared_informer.go:318] Caches are synced for job
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.079681       1 shared_informer.go:318] Caches are synced for deployment
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.079893       1 shared_informer.go:318] Caches are synced for cronjob
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.085788       1 shared_informer.go:318] Caches are synced for node
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.085869       1 range_allocator.go:174] "Sending events to api server"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.085937       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.085945       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.085951       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.086224       1 shared_informer.go:318] Caches are synced for PVC protection
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.093730       1 shared_informer.go:318] Caches are synced for disruption
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.093802       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.097148       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.098688       1 shared_informer.go:318] Caches are synced for service account
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.102404       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000" podCIDRs=["10.244.0.0/24"]
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.112396       1 shared_informer.go:318] Caches are synced for taint
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.112849       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.113070       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.113155       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.112659       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.113865       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.113966       1 taint_manager.go:210] "Sending events to api server"
	I0314 19:42:14.139686    8428 command_runner.go:130] ! I0314 19:19:16.115068       1 shared_informer.go:318] Caches are synced for namespace
	I0314 19:42:14.140230    8428 command_runner.go:130] ! I0314 19:19:16.118281       1 event.go:307] "Event occurred" object="multinode-442000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000 event: Registered Node multinode-442000 in Controller"
	I0314 19:42:14.140230    8428 command_runner.go:130] ! I0314 19:19:16.134584       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0314 19:42:14.140230    8428 command_runner.go:130] ! I0314 19:19:16.151625       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.140230    8428 command_runner.go:130] ! I0314 19:19:16.171551       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.140351    8428 command_runner.go:130] ! I0314 19:19:16.174341       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.140351    8428 command_runner.go:130] ! I0314 19:19:16.174358       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.140399    8428 command_runner.go:130] ! I0314 19:19:16.184987       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:14.140454    8428 command_runner.go:130] ! I0314 19:19:16.223118       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 19:42:14.140454    8428 command_runner.go:130] ! I0314 19:19:16.225526       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 19:42:14.140503    8428 command_runner.go:130] ! I0314 19:19:16.225950       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 19:42:14.140554    8428 command_runner.go:130] ! I0314 19:19:16.274020       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 19:42:14.140601    8428 command_runner.go:130] ! I0314 19:19:16.320250       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7b9lf"
	I0314 19:42:14.140655    8428 command_runner.go:130] ! I0314 19:19:16.328650       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cg28g"
	I0314 19:42:14.140655    8428 command_runner.go:130] ! I0314 19:19:16.626855       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:14.140655    8428 command_runner.go:130] ! I0314 19:19:16.633099       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:14.140765    8428 command_runner.go:130] ! I0314 19:19:16.633344       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 19:42:14.140765    8428 command_runner.go:130] ! I0314 19:19:16.789964       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0314 19:42:14.140813    8428 command_runner.go:130] ! I0314 19:19:17.099870       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pvxpr"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:17.114819       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-d22jc"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:17.146456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="355.713874ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:17.166202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.688691ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:17.169087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="2.771063ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:18.399096       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:18.448322       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-pvxpr"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:18.482373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.944747ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:18.500300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.716936ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:18.500887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.317µs"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:26.475232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.515µs"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:26.505160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.309µs"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:28.423231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.310782ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:28.423925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.006µs"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:19:31.116802       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:02.467925       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:02.479576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m02" podCIDRs=["10.244.1.0/24"]
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:02.507610       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-72dzs"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:02.511169       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-c7m4p"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:06.145908       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:06.146201       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:20.862710       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:45.188036       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:45.218022       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8drpb"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:45.241867       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-7446n"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:45.267427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="80.313691ms"
	I0314 19:42:14.140922    8428 command_runner.go:130] ! I0314 19:22:45.292961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="25.159362ms"
	I0314 19:42:14.141459    8428 command_runner.go:130] ! I0314 19:22:45.311264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.241692ms"
	I0314 19:42:14.141459    8428 command_runner.go:130] ! I0314 19:22:45.311407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="93.911µs"
	I0314 19:42:14.141459    8428 command_runner.go:130] ! I0314 19:22:48.320252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.515467ms"
	I0314 19:42:14.141459    8428 command_runner.go:130] ! I0314 19:22:48.320403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.303µs"
	I0314 19:42:14.141617    8428 command_runner.go:130] ! I0314 19:22:48.344640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.018521ms"
	I0314 19:42:14.141617    8428 command_runner.go:130] ! I0314 19:22:48.344838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.804µs"
	I0314 19:42:14.141669    8428 command_runner.go:130] ! I0314 19:26:25.208780       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:25.214591       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:25.248082       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.2.0/24"]
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:25.265233       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-r7zdb"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:25.273144       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w2qls"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:26.207170       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:26.207236       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:26:43.758846       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:33:46.333556       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:33:46.333891       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:33:46.348976       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:33:46.370200       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:39.868492       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:41.400896       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-442000-m03 event: Removing Node multinode-442000-m03 from Controller"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:47.335802       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:47.336128       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:47.352987       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.3.0/24"]
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:51.403261       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:36:54.976864       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:38:21.463528       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.141717    8428 command_runner.go:130] ! I0314 19:38:21.463818       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:42:14.142314    8428 command_runner.go:130] ! I0314 19:38:21.486796       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.142314    8428 command_runner.go:130] ! I0314 19:38:21.501217       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.159016    8428 logs.go:123] Gathering logs for etcd [a81a9c43c355] ...
	I0314 19:42:14.159016    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81a9c43c355"
	I0314 19:42:14.192397    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:01.944953Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0314 19:42:14.192904    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.945607Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.93.236:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.93.236:2380","--initial-cluster=multinode-442000=https://172.17.93.236:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.93.236:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.93.236:2380","--name=multinode-442000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0314 19:42:14.192974    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.945676Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0314 19:42:14.192974    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:01.945701Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0314 19:42:14.192974    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94571Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.93.236:2380"]}
	I0314 19:42:14.193047    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94582Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0314 19:42:14.193047    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94751Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"]}
	I0314 19:42:14.193200    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.948798Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-442000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0314 19:42:14.193234    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.989049Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"39.493838ms"}
	I0314 19:42:14.193273    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.0258Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0314 19:42:14.193273    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.055698Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","commit-index":1967}
	I0314 19:42:14.193334    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.067927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=()"}
	I0314 19:42:14.193390    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.067975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became follower at term 2"}
	I0314 19:42:14.193390    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.068051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fa26a6ed08186c39 [peers: [], term: 2, commit: 1967, applied: 0, lastindex: 1967, lastterm: 2]"}
	I0314 19:42:14.193390    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:02.100633Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0314 19:42:14.193441    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.113992Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1090}
	I0314 19:42:14.193441    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.125551Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1704}
	I0314 19:42:14.193441    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.137052Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0314 19:42:14.193507    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.152836Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"fa26a6ed08186c39","timeout":"7s"}
	I0314 19:42:14.193507    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.153448Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"fa26a6ed08186c39"}
	I0314 19:42:14.193507    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.153504Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"fa26a6ed08186c39","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0314 19:42:14.193568    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154089Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0314 19:42:14.193568    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154894Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0314 19:42:14.193624    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154977Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0314 19:42:14.193624    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154992Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0314 19:42:14.193624    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=(18025278095570267193)"}
	I0314 19:42:14.193676    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158756Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","added-peer-id":"fa26a6ed08186c39","added-peer-peer-urls":["https://172.17.86.124:2380"]}
	I0314 19:42:14.193676    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","cluster-version":"3.5"}
	I0314 19:42:14.193732    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158969Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0314 19:42:14.193732    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.159838Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0314 19:42:14.193783    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.160148Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"fa26a6ed08186c39","initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0314 19:42:14.193838    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.160272Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0314 19:42:14.193838    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.161335Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.93.236:2380"}
	I0314 19:42:14.193838    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.161389Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.93.236:2380"}
	I0314 19:42:14.193913    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 is starting a new election at term 2"}
	I0314 19:42:14.193913    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became pre-candidate at term 2"}
	I0314 19:42:14.193913    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgPreVoteResp from fa26a6ed08186c39 at term 2"}
	I0314 19:42:14.193976    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became candidate at term 3"}
	I0314 19:42:14.193976    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgVoteResp from fa26a6ed08186c39 at term 3"}
	I0314 19:42:14.193976    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became leader at term 3"}
	I0314 19:42:14.193976    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fa26a6ed08186c39 elected leader fa26a6ed08186c39 at term 3"}
	I0314 19:42:14.194060    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.292472Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fa26a6ed08186c39","local-member-attributes":"{Name:multinode-442000 ClientURLs:[https://172.17.93.236:2379]}","request-path":"/0/members/fa26a6ed08186c39/attributes","cluster-id":"76b99849a2fc5549","publish-timeout":"7s"}
	I0314 19:42:14.194060    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.292867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0314 19:42:14.194060    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.296522Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0314 19:42:14.194114    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.298446Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0314 19:42:14.194114    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.311867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.93.236:2379"}
	I0314 19:42:14.194114    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.311957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0314 19:42:14.194114    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.31205Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0314 19:42:14.199853    8428 logs.go:123] Gathering logs for kube-proxy [497007582e44] ...
	I0314 19:42:14.199853    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497007582e44"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.342277       1 server_others.go:69] "Using iptables proxy"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.381589       1 node.go:141] Successfully retrieved node IP: 172.17.93.236
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.703360       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.703384       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.724122       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.726554       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.729424       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.729460       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.732062       1 config.go:188] "Starting service config controller"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.732501       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.732571       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.732581       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.733523       1 config.go:315] "Starting node config controller"
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.733550       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.832968       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.833049       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:42:14.232229    8428 command_runner.go:130] ! I0314 19:41:08.835501       1 shared_informer.go:318] Caches are synced for node config
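	The proxier messages above correspond to on-node state that can be spot-checked directly; the commands below are an illustrative sketch, not output captured in this run:
	    # expect 1, per the "Setting route_localnet=1 ..." message above
	    sysctl net.ipv4.conf.all.route_localnet
	    # nat-table chains programmed by the iptables Proxier
	    sudo iptables -t nat -L KUBE-SERVICES -n | head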
	I0314 19:42:14.235918    8428 logs.go:123] Gathering logs for kindnet [999e4c168afe] ...
	I0314 19:42:14.235918    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 999e4c168afe"
	I0314 19:42:14.261684    8428 command_runner.go:130] ! I0314 19:41:08.409720       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0314 19:42:14.262069    8428 command_runner.go:130] ! I0314 19:41:08.410195       1 main.go:107] hostIP = 172.17.93.236
	I0314 19:42:14.262168    8428 command_runner.go:130] ! podIP = 172.17.93.236
	I0314 19:42:14.262168    8428 command_runner.go:130] ! I0314 19:41:08.411178       1 main.go:116] setting mtu 1500 for CNI 
	I0314 19:42:14.262168    8428 command_runner.go:130] ! I0314 19:41:08.411230       1 main.go:146] kindnetd IP family: "ipv4"
	I0314 19:42:14.262215    8428 command_runner.go:130] ! I0314 19:41:08.411277       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0314 19:42:14.262240    8428 command_runner.go:130] ! I0314 19:41:38.747509       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0314 19:42:14.262240    8428 command_runner.go:130] ! I0314 19:41:38.770843       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:14.262240    8428 command_runner.go:130] ! I0314 19:41:38.770994       1 main.go:227] handling current node
	I0314 19:42:14.262240    8428 command_runner.go:130] ! I0314 19:41:38.771413       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:14.262240    8428 command_runner.go:130] ! I0314 19:41:38.771428       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:14.262240    8428 command_runner.go:130] ! I0314 19:41:38.771670       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.80.135 Flags: [] Table: 0} 
	I0314 19:42:14.262327    8428 command_runner.go:130] ! I0314 19:41:38.771817       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:14.262327    8428 command_runner.go:130] ! I0314 19:41:38.771827       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:14.262361    8428 command_runner.go:130] ! I0314 19:41:38.771944       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.84.215 Flags: [] Table: 0} 
	I0314 19:42:14.262361    8428 command_runner.go:130] ! I0314 19:41:48.777997       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:14.262361    8428 command_runner.go:130] ! I0314 19:41:48.778091       1 main.go:227] handling current node
	I0314 19:42:14.262361    8428 command_runner.go:130] ! I0314 19:41:48.778105       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:14.262361    8428 command_runner.go:130] ! I0314 19:41:48.778113       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:14.262423    8428 command_runner.go:130] ! I0314 19:41:48.778217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:14.262467    8428 command_runner.go:130] ! I0314 19:41:48.778373       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:14.262467    8428 command_runner.go:130] ! I0314 19:41:58.793215       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:14.262467    8428 command_runner.go:130] ! I0314 19:41:58.793285       1 main.go:227] handling current node
	I0314 19:42:14.262467    8428 command_runner.go:130] ! I0314 19:41:58.793297       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:14.262467    8428 command_runner.go:130] ! I0314 19:41:58.793304       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:14.262526    8428 command_runner.go:130] ! I0314 19:41:58.793793       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:14.262526    8428 command_runner.go:130] ! I0314 19:41:58.793859       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:14.262570    8428 command_runner.go:130] ! I0314 19:42:08.808709       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:14.262606    8428 command_runner.go:130] ! I0314 19:42:08.808803       1 main.go:227] handling current node
	I0314 19:42:14.262606    8428 command_runner.go:130] ! I0314 19:42:08.808818       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:14.262647    8428 command_runner.go:130] ! I0314 19:42:08.808826       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:14.262647    8428 command_runner.go:130] ! I0314 19:42:08.809153       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:14.262647    8428 command_runner.go:130] ! I0314 19:42:08.809168       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
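	The route kindnet reports adding for multinode-442000-m02 is the host-level equivalent of the following ip(8) invocation (addresses taken from the log lines above; the command itself is illustrative, not part of this run):
	    # m02's pod CIDR routed via that node's IP
	    sudo ip route replace 10.244.1.0/24 via 172.17.80.135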
	I0314 19:42:14.265362    8428 logs.go:123] Gathering logs for Docker ...
	I0314 19:42:14.265362    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 19:42:14.288815    8428 command_runner.go:130] > Mar 14 19:39:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:14.288888    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:14.289423    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:14.289423    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:14.289423    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:14.289423    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.289423    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0314 19:42:14.289562    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.289562    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0314 19:42:14.289562    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:14.289664    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.289664    8428 command_runner.go:130] > Mar 14 19:40:26 multinode-442000 systemd[1]: Starting Docker Application Container Engine...
	I0314 19:42:14.289794    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.010258466Z" level=info msg="Starting up"
	I0314 19:42:14.289880    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.011413188Z" level=info msg="containerd not running, starting managed containerd"
	I0314 19:42:14.289880    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.012927209Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=656
	I0314 19:42:14.289969    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.042687292Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069138554Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069242083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069344111Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069362416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070081016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070164740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070380400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070511536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070532642Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070544145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070983067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.071556427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074554061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.290585    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074645687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.290585    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074800830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.290675    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074883153Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0314 19:42:14.290788    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075687977Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0314 19:42:14.290823    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075800308Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0314 19:42:14.290823    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075818813Z" level=info msg="metadata content store policy set" policy=shared
	I0314 19:42:14.290917    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081334348Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0314 19:42:14.290917    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081440978Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0314 19:42:14.291002    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081463484Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0314 19:42:14.291002    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081526902Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0314 19:42:14.291078    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081545007Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0314 19:42:14.291157    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081621128Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0314 19:42:14.291157    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082036144Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082193387Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082276711Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082349431Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082368036Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082385141Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082401545Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082417450Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082433154Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082457161Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082515377Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082533482Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082554788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082572093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082586997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291296    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082601801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291826    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082616305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291826    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082631109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291913    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082643913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.291913    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082659317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292002    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082673721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292002    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082690226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292084    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082704230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082717333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082730637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082747942Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082771048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082785952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082799956Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082936994Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082973004Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082986808Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082998612Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083067631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083095839Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083107842Z" level=info msg="NRI interface is disabled by configuration."
	I0314 19:42:14.292140    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083364013Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0314 19:42:14.292662    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083531860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0314 19:42:14.292662    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083575672Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0314 19:42:14.292662    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083609482Z" level=info msg="containerd successfully booted in 0.043398s"
	I0314 19:42:14.292662    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.063674621Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0314 19:42:14.292788    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.220876850Z" level=info msg="Loading containers: start."
	I0314 19:42:14.292788    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.643208421Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0314 19:42:14.292788    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.726589336Z" level=info msg="Loading containers: done."
	I0314 19:42:14.292879    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.750141296Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0314 19:42:14.292879    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.750832983Z" level=info msg="Daemon has completed initialization"
	I0314 19:42:14.292963    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 systemd[1]: Started Docker Application Container Engine.
	I0314 19:42:14.292963    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.799522730Z" level=info msg="API listen on [::]:2376"
	I0314 19:42:14.292963    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.799691776Z" level=info msg="API listen on /var/run/docker.sock"
	I0314 19:42:14.293048    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 systemd[1]: Stopping Docker Application Container Engine...
	I0314 19:42:14.293048    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.824796168Z" level=info msg="Processing signal 'terminated'"
	I0314 19:42:14.293131    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.825961557Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0314 19:42:14.293131    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826585605Z" level=info msg="Daemon shutdown complete"
	I0314 19:42:14.293131    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826653911Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0314 19:42:14.293215    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826812323Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: docker.service: Deactivated successfully.
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: Stopped Docker Application Container Engine.
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: Starting Docker Application Container Engine...
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.899936864Z" level=info msg="Starting up"
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.900739426Z" level=info msg="containerd not running, starting managed containerd"
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.901763504Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1049
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.930795337Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.957961927Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958063735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958107338Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958123339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958150841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958163842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958360458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.293291    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958444864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.293829    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958463766Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0314 19:42:14.293829    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958475466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.293936    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958502569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.293936    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958670881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.294024    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961627209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.294024    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961715316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:14.294108    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961871928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:14.294191    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961949634Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961985336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962005238Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962017139Z" level=info msg="metadata content store policy set" policy=shared
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962188852Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962280259Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962311462Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962328263Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962344564Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962393368Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962810900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962939310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963018216Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963036317Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963060419Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963076820Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.294266    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963091221Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.294813    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963106323Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.294813    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963121324Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.294813    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963135425Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.295045    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963148726Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.295045    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963162027Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0314 19:42:14.295199    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963184029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295240    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963205330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295240    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963220631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295240    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963270235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295331    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963286336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295331    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963300438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295331    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963313039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295331    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963326640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295471    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963341141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295471    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963357642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295538    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963369743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295538    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963382444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295597    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963395545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295597    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963411646Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0314 19:42:14.295597    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963433148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295597    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963449149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295713    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963461550Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0314 19:42:14.295713    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963512954Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0314 19:42:14.295713    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963529855Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0314 19:42:14.295713    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963593860Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963606261Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963665466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963679767Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963695368Z" level=info msg="NRI interface is disabled by configuration."
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.964176205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.964503330Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0314 19:42:14.295823    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.965392899Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0314 19:42:14.296130    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.966787506Z" level=info msg="containerd successfully booted in 0.037267s"
	I0314 19:42:14.296130    8428 command_runner.go:130] > Mar 14 19:40:54 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:54.945087153Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0314 19:42:14.296216    8428 command_runner.go:130] > Mar 14 19:40:54 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:54.972020025Z" level=info msg="Loading containers: start."
	I0314 19:42:14.296251    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.259462934Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0314 19:42:14.296297    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.336883289Z" level=info msg="Loading containers: done."
	I0314 19:42:14.296336    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.370669888Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0314 19:42:14.296411    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.370874904Z" level=info msg="Daemon has completed initialization"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.415311921Z" level=info msg="API listen on /var/run/docker.sock"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.415467233Z" level=info msg="API listen on [::]:2376"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 systemd[1]: Started Docker Application Container Engine.
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Loaded network plugin cni"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker Info: &{ID:04f4855f-417a-422c-b5bb-3cf8a43fb438 Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2024-03-14T19:40:56.401787998Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0004c0150 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-442000 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Start cri-dockerd grpc backend"
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0314 19:42:14.296439    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-7446n_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773\""
	I0314 19:42:14.297011    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-d22jc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0\""
	I0314 19:42:14.297074    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294795352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297106    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294882858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297134    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294903860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297168    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.295303891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297207    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.380666857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297248    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.380946878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297248    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.381075288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297248    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.381588628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297248    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418754186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297337    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418872295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297337    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418919499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297380    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.419130315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297422    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35dd339c8a08d84d0d1a4d2c062b04d44baff78d20c6ed33ce967d50c18eaa3c/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.297422    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.449937485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297422    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450067495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297480    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450100297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297480    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450295012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297538    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67475bf80ddd91df7549842450a8d92c27cd16f814cd4e4c750a7cad7d82fc9f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.297538    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a27fa2188ee4cf0c44cde0f8cae03a83655bc574c856082192e3261801efcc72/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.297598    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c70744e60ac50b50085376d0c124ff15cc884b8a836b0085ef71a65ddb06bcfd/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.297640    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782527266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297640    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782834890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782945299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.783324628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950307171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950638097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950847113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.951959699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.033329657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.033826996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.034090516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.034801671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038389546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038570160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038686569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038972291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:05Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056067890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056148096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056166397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056406816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.109761119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110023440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110099145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110475674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.297682    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.116978275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298211    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117046280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298250    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117060481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298250    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117158888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a723f141543f2007cc07e048ef5836fca4ae70749b7266630f6c890bb233c09a/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f513a7aff67200987eb0f28647720ea4cb9bbdb684fc85d1b08c0dd54563517d/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432676357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432829669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432849370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.433004382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.579105320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580432922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580451623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580554931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9176b55446637c4407c9a64ce7d85fce2b395bcc0a22061f5f7ff304ff2d47f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.897653021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.897936143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.898062553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.898459584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1043]: time="2024-03-14T19:41:37.705977514Z" level=info msg="ignoring event" container=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706482647Z" level=info msg="shim disconnected" id=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 namespace=moby
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706677460Z" level=warning msg="cleaning up after shim disconnected" id=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 namespace=moby
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706692261Z" level=info msg="cleaning up dead shim" namespace=moby
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663136392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663371709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663411212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663537821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837487028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837604337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837625738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837719345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.848167835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849098605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849287919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849656747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.575693713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.575950032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.576019637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577004211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577168224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577288033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577583255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.576656985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:13 multinode-442000 dockerd[1043]: 2024/03/14 19:42:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:14.298286    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:14.327783    8428 logs.go:123] Gathering logs for kube-proxy [2a62baf3f1b4] ...
	I0314 19:42:14.327783    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a62baf3f1b4"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.247796       1 server_others.go:69] "Using iptables proxy"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.275162       1 node.go:141] Successfully retrieved node IP: 172.17.86.124
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.379821       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.379851       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.395429       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.395506       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.395856       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.395890       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.417861       1 config.go:188] "Starting service config controller"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.417913       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.417950       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.420511       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.426566       1 config.go:315] "Starting node config controller"
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.426600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.519508       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.524347       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:42:14.354736    8428 command_runner.go:130] ! I0314 19:19:18.527360       1 shared_informer.go:318] Caches are synced for node config
	I0314 19:42:14.356981    8428 logs.go:123] Gathering logs for dmesg ...
	I0314 19:42:14.356981    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:42:14.379440    8428 command_runner.go:130] > [Mar14 19:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0314 19:42:14.379477    8428 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0314 19:42:14.379477    8428 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0314 19:42:14.379556    8428 command_runner.go:130] > [  +0.111500] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0314 19:42:14.379556    8428 command_runner.go:130] > [  +0.025646] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0314 19:42:14.379556    8428 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.051209] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.017569] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0314 19:42:14.379616    8428 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +5.774438] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.663188] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +1.473946] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +5.849126] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0314 19:42:14.379616    8428 command_runner.go:130] > [Mar14 19:40] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.179743] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [ +24.853688] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.096946] kauditd_printk_skb: 73 callbacks suppressed
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.497369] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.185545] systemd-fstab-generator[1021]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.215423] systemd-fstab-generator[1035]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +2.887443] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.193519] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.182072] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.258988] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.819687] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +0.099817] kauditd_printk_skb: 205 callbacks suppressed
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +2.940519] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [Mar14 19:41] kauditd_printk_skb: 84 callbacks suppressed
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +4.042735] systemd-fstab-generator[3087]: Ignoring "noauto" option for root device
	I0314 19:42:14.379616    8428 command_runner.go:130] > [  +7.733278] kauditd_printk_skb: 70 callbacks suppressed
	I0314 19:42:14.381741    8428 logs.go:123] Gathering logs for kube-apiserver [a598d24960de] ...
	I0314 19:42:14.381741    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a598d24960de"
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:02.580148       1 options.go:220] external host was not specified, using 172.17.93.236
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:02.584195       1 server.go:148] Version: v1.28.4
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:02.584361       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:03.945945       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:03.963375       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:03.963415       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:03.963973       1 instance.go:298] Using reconciler: lease
	I0314 19:42:14.412838    8428 command_runner.go:130] ! I0314 19:41:04.031000       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0314 19:42:14.413376    8428 command_runner.go:130] ! W0314 19:41:04.031118       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.413376    8428 command_runner.go:130] ! I0314 19:41:04.342643       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0314 19:42:14.413434    8428 command_runner.go:130] ! I0314 19:41:04.343120       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0314 19:42:14.413434    8428 command_runner.go:130] ! I0314 19:41:04.862959       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0314 19:42:14.413478    8428 command_runner.go:130] ! I0314 19:41:04.875745       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0314 19:42:14.413517    8428 command_runner.go:130] ! W0314 19:41:04.875858       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.413543    8428 command_runner.go:130] ! W0314 19:41:04.875867       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.413586    8428 command_runner.go:130] ! I0314 19:41:04.876422       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0314 19:42:14.413625    8428 command_runner.go:130] ! W0314 19:41:04.876506       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.413625    8428 command_runner.go:130] ! I0314 19:41:04.877676       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0314 19:42:14.413675    8428 command_runner.go:130] ! I0314 19:41:04.878707       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0314 19:42:14.413675    8428 command_runner.go:130] ! W0314 19:41:04.878804       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0314 19:42:14.413675    8428 command_runner.go:130] ! W0314 19:41:04.878812       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0314 19:42:14.413675    8428 command_runner.go:130] ! I0314 19:41:04.881331       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0314 19:42:14.413763    8428 command_runner.go:130] ! W0314 19:41:04.881418       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0314 19:42:14.413763    8428 command_runner.go:130] ! I0314 19:41:04.882613       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0314 19:42:14.413763    8428 command_runner.go:130] ! W0314 19:41:04.882706       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.413763    8428 command_runner.go:130] ! W0314 19:41:04.882714       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.413901    8428 command_runner.go:130] ! I0314 19:41:04.883473       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0314 19:42:14.413901    8428 command_runner.go:130] ! W0314 19:41:04.883562       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.413969    8428 command_runner.go:130] ! W0314 19:41:04.883619       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! I0314 19:41:04.884340       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0314 19:42:14.414044    8428 command_runner.go:130] ! I0314 19:41:04.886289       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.886373       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.886382       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! I0314 19:41:04.886877       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.886971       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.886979       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! I0314 19:41:04.888213       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.888261       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! I0314 19:41:04.903461       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.903509       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.903517       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! I0314 19:41:04.906409       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.906458       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414044    8428 command_runner.go:130] ! W0314 19:41:04.906466       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.414576    8428 command_runner.go:130] ! I0314 19:41:04.915366       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0314 19:42:14.414622    8428 command_runner.go:130] ! W0314 19:41:04.915463       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414622    8428 command_runner.go:130] ! W0314 19:41:04.915472       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.414729    8428 command_runner.go:130] ! I0314 19:41:04.916839       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0314 19:42:14.414729    8428 command_runner.go:130] ! I0314 19:41:04.918318       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0314 19:42:14.414817    8428 command_runner.go:130] ! W0314 19:41:04.918410       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.414878    8428 command_runner.go:130] ! W0314 19:41:04.918418       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.414918    8428 command_runner.go:130] ! I0314 19:41:04.922469       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0314 19:42:14.414965    8428 command_runner.go:130] ! W0314 19:41:04.922563       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0314 19:42:14.415014    8428 command_runner.go:130] ! W0314 19:41:04.922576       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0314 19:42:14.415058    8428 command_runner.go:130] ! I0314 19:41:04.923589       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0314 19:42:14.415107    8428 command_runner.go:130] ! W0314 19:41:04.923671       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.415167    8428 command_runner.go:130] ! W0314 19:41:04.923678       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:14.415218    8428 command_runner.go:130] ! I0314 19:41:04.924323       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0314 19:42:14.415218    8428 command_runner.go:130] ! W0314 19:41:04.924409       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.415274    8428 command_runner.go:130] ! I0314 19:41:04.946149       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0314 19:42:14.415373    8428 command_runner.go:130] ! W0314 19:41:04.946188       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:14.415412    8428 command_runner.go:130] ! I0314 19:41:05.649195       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.649351       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.650113       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.651281       1 secure_serving.go:213] Serving securely on [::]:8443
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.651311       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.651726       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.651907       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.654468       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.654814       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.655201       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.656049       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.656308       1 available_controller.go:423] Starting AvailableConditionController
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.656404       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.651597       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.656599       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.658623       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.658785       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0314 19:42:14.415448    8428 command_runner.go:130] ! I0314 19:41:05.659483       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0314 19:42:14.415979    8428 command_runner.go:130] ! I0314 19:41:05.661076       1 aggregator.go:164] waiting for initial CRD sync...
	I0314 19:42:14.416026    8428 command_runner.go:130] ! I0314 19:41:05.662487       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0314 19:42:14.416069    8428 command_runner.go:130] ! I0314 19:41:05.662789       1 controller.go:78] Starting OpenAPI AggregationController
	I0314 19:42:14.416109    8428 command_runner.go:130] ! I0314 19:41:05.727194       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.728512       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729067       1 controller.go:116] Starting legacy_token_tracking_controller
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729317       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729432       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729507       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729606       1 controller.go:134] Starting OpenAPI controller
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729710       1 controller.go:85] Starting OpenAPI V3 controller
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729812       1 naming_controller.go:291] Starting NamingConditionController
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.729911       1 establishing_controller.go:76] Starting EstablishingController
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.730411       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.730521       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.730616       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.799477       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.813580       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.830168       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.830217       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.830281       1 aggregator.go:166] initial CRD sync complete...
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.830289       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 19:42:14.416143    8428 command_runner.go:130] ! I0314 19:41:05.830295       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 19:42:14.416672    8428 command_runner.go:130] ! I0314 19:41:05.830301       1 cache.go:39] Caches are synced for autoregister controller
	I0314 19:42:14.416720    8428 command_runner.go:130] ! I0314 19:41:05.846941       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 19:42:14.416782    8428 command_runner.go:130] ! I0314 19:41:05.857057       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 19:42:14.416829    8428 command_runner.go:130] ! I0314 19:41:05.858966       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 19:42:14.416873    8428 command_runner.go:130] ! I0314 19:41:05.865554       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 19:42:14.416963    8428 command_runner.go:130] ! I0314 19:41:05.865721       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 19:42:14.416988    8428 command_runner.go:130] ! I0314 19:41:06.667315       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 19:42:14.417089    8428 command_runner.go:130] ! W0314 19:41:07.118314       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.17.93.236]
	I0314 19:42:14.417132    8428 command_runner.go:130] ! I0314 19:41:07.120612       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 19:42:14.417132    8428 command_runner.go:130] ! I0314 19:41:07.135973       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 19:42:14.417207    8428 command_runner.go:130] ! I0314 19:41:09.049225       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 19:42:14.417207    8428 command_runner.go:130] ! I0314 19:41:09.264220       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 19:42:14.417207    8428 command_runner.go:130] ! I0314 19:41:09.277110       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 19:42:14.417207    8428 command_runner.go:130] ! I0314 19:41:09.393446       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 19:42:14.417207    8428 command_runner.go:130] ! I0314 19:41:09.422214       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 19:42:14.424616    8428 logs.go:123] Gathering logs for coredns [b159aedddf94] ...
	I0314 19:42:14.424616    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b159aedddf94"
	I0314 19:42:14.458823    8428 command_runner.go:130] > .:53
	I0314 19:42:14.459229    8428 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0314 19:42:14.459229    8428 command_runner.go:130] > CoreDNS-1.10.1
	I0314 19:42:14.459229    8428 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0314 19:42:14.459269    8428 command_runner.go:130] > [INFO] 127.0.0.1:38965 - 37747 "HINFO IN 9162400456686827331.1281991328183180689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052220616s
	I0314 19:42:14.459541    8428 logs.go:123] Gathering logs for coredns [8899bc003893] ...
	I0314 19:42:14.459541    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8899bc003893"
	I0314 19:42:14.488904    8428 command_runner.go:130] > .:53
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0314 19:42:14.488904    8428 command_runner.go:130] > CoreDNS-1.10.1
	I0314 19:42:14.488904    8428 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 127.0.0.1:56069 - 18242 "HINFO IN 687842018263708116.264844942244880205. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.040568923s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:42598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000297623s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:49284 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.094729955s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:58753 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.047978925s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:60240 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.250879171s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:35705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107809s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:38792 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00013461s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:53339 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000060304s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:55975 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000059805s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:55630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117109s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:50181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.122219329s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:58918 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194615s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:48641 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012501s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:57540 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.0346353s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:59969 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000278722s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:51295 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167413s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.0.3:45005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148512s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:51938 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100608s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:46248 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00024762s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:46501 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100408s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:52414 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056704s
	I0314 19:42:14.488904    8428 command_runner.go:130] > [INFO] 10.244.1.2:44908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000121409s
	I0314 19:42:14.489918    8428 command_runner.go:130] > [INFO] 10.244.1.2:49578 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011941s
	I0314 19:42:14.489918    8428 command_runner.go:130] > [INFO] 10.244.1.2:51057 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060205s
	I0314 19:42:14.489918    8428 command_runner.go:130] > [INFO] 10.244.1.2:56240 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055805s
	I0314 19:42:14.489918    8428 command_runner.go:130] > [INFO] 10.244.0.3:32901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172914s
	I0314 19:42:14.489918    8428 command_runner.go:130] > [INFO] 10.244.0.3:41115 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149912s
	I0314 19:42:14.490029    8428 command_runner.go:130] > [INFO] 10.244.0.3:40494 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013161s
	I0314 19:42:14.490076    8428 command_runner.go:130] > [INFO] 10.244.0.3:40575 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077106s
	I0314 19:42:14.490076    8428 command_runner.go:130] > [INFO] 10.244.1.2:55307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194115s
	I0314 19:42:14.490076    8428 command_runner.go:130] > [INFO] 10.244.1.2:46435 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00025832s
	I0314 19:42:14.490158    8428 command_runner.go:130] > [INFO] 10.244.1.2:52095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156813s
	I0314 19:42:14.490158    8428 command_runner.go:130] > [INFO] 10.244.1.2:57849 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012701s
	I0314 19:42:14.490158    8428 command_runner.go:130] > [INFO] 10.244.0.3:47270 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244119s
	I0314 19:42:14.490158    8428 command_runner.go:130] > [INFO] 10.244.0.3:59009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000411532s
	I0314 19:42:14.490253    8428 command_runner.go:130] > [INFO] 10.244.0.3:40925 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108108s
	I0314 19:42:14.490253    8428 command_runner.go:130] > [INFO] 10.244.0.3:56417 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000067706s
	I0314 19:42:14.490253    8428 command_runner.go:130] > [INFO] 10.244.1.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108409s
	I0314 19:42:14.490253    8428 command_runner.go:130] > [INFO] 10.244.1.2:38949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118209s
	I0314 19:42:14.490253    8428 command_runner.go:130] > [INFO] 10.244.1.2:56933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156413s
	I0314 19:42:14.490350    8428 command_runner.go:130] > [INFO] 10.244.1.2:35971 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000072406s
	I0314 19:42:14.490350    8428 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0314 19:42:14.490350    8428 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
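Each query line above follows CoreDNS's log-plugin shape: client ip:port, query counter, the quoted question section (type, class, name, protocol, request size, DO bit, UDP buffer size), then rcode, response flags, response size, and duration; the closing SIGTERM/lameduck pair is the health plugin draining for 5s before shutdown. A small Go sketch that splits one such line into its three sections (the field layout is inferred from the samples above, not from CoreDNS documentation):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseCoreDNSLog splits a log-plugin line into client, question,
    // and response parts, using the quoted question as the delimiter.
    func parseCoreDNSLog(line string) (client, question, response string, ok bool) {
        line = strings.TrimPrefix(line, "[INFO] ")
        open := strings.Index(line, `"`)
        end := strings.LastIndex(line, `"`)
        if open < 0 || end <= open {
            return "", "", "", false
        }
        parts := strings.Fields(line[:open])
        if len(parts) == 0 {
            return "", "", "", false
        }
        client = parts[0]                          // e.g. 10.244.0.3:49284
        question = line[open+1 : end]              // e.g. AAAA IN kubernetes.io. udp 31 false 512
        response = strings.TrimSpace(line[end+1:]) // e.g. NOERROR qr,rd,ra 140 0.094729955s
        return client, question, response, true
    }

    func main() {
        c, q, r, _ := parseCoreDNSLog(`[INFO] 10.244.0.3:49284 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.094729955s`)
        fmt.Println(c, "|", q, "|", r)
    }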
	I0314 19:42:14.493252    8428 logs.go:123] Gathering logs for kube-scheduler [32d90a3ea213] ...
	I0314 19:42:14.493324    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d90a3ea213"
	I0314 19:42:14.520031    8428 command_runner.go:130] ! I0314 19:41:03.376319       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:14.520224    8428 command_runner.go:130] ! W0314 19:41:05.770317       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0314 19:42:14.520224    8428 command_runner.go:130] ! W0314 19:41:05.770426       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:14.520302    8428 command_runner.go:130] ! W0314 19:41:05.770581       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0314 19:42:14.520302    8428 command_runner.go:130] ! W0314 19:41:05.770640       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.841573       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.841674       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.844125       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.845062       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.845143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.845293       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:14.520302    8428 command_runner.go:130] ! I0314 19:41:05.946840       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
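The requestheader_controller.go and authentication.go warnings in the scheduler excerpt above are transient while RBAC objects are still being created at bootstrap, and the message itself names the fix. If the warning persisted, the suggested rolebinding corresponds to this client-go call (role name and subject are taken from the warning text; the binding name is a hypothetical choice):

    package main

    import (
        "context"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        rb := &rbacv1.RoleBinding{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "scheduler-authentication-reader", // hypothetical name
                Namespace: "kube-system",
            },
            RoleRef: rbacv1.RoleRef{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "Role",
                Name:     "extension-apiserver-authentication-reader",
            },
            Subjects: []rbacv1.Subject{{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "User",
                Name:     "system:kube-scheduler",
            }},
        }
        if _, err := cs.RbacV1().RoleBindings("kube-system").Create(context.TODO(), rb, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }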
	I0314 19:42:14.523302    8428 logs.go:123] Gathering logs for kube-controller-manager [12baf105f0bb] ...
	I0314 19:42:14.523375    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12baf105f0bb"
	I0314 19:42:14.555558    8428 command_runner.go:130] ! I0314 19:41:03.101287       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:14.555558    8428 command_runner.go:130] ! I0314 19:41:03.872151       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 19:42:14.555558    8428 command_runner.go:130] ! I0314 19:41:03.874301       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:14.555648    8428 command_runner.go:130] ! I0314 19:41:03.879645       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:14.555648    8428 command_runner.go:130] ! I0314 19:41:03.880765       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:14.555648    8428 command_runner.go:130] ! I0314 19:41:03.883873       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:42:14.555648    8428 command_runner.go:130] ! I0314 19:41:03.883977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:14.555648    8428 command_runner.go:130] ! I0314 19:41:07.787609       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0314 19:42:14.555710    8428 command_runner.go:130] ! I0314 19:41:07.796442       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0314 19:42:14.555710    8428 command_runner.go:130] ! I0314 19:41:07.796953       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0314 19:42:14.555710    8428 command_runner.go:130] ! I0314 19:41:07.798900       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.848846       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.849015       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.849025       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.855296       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.858491       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.858512       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.864964       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.865080       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.865088       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.870629       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.871089       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.871332       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.889997       1 shared_informer.go:318] Caches are synced for tokens
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.899597       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.900355       1 horizontal.go:200] "Starting HPA controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.901325       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.921217       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.922072       1 disruption.go:433] "Sending events to api server."
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.922293       1 disruption.go:444] "Starting disruption controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.922481       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.927437       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.929290       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.929325       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.936410       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.936565       1 controller.go:169] "Starting ephemeral volume controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.936765       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.954720       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.954939       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.955142       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.970387       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.970474       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.970624       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.971307       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.975049       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.973288       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.974848       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0314 19:42:14.555750    8428 command_runner.go:130] ! I0314 19:41:07.974977       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0314 19:42:14.556276    8428 command_runner.go:130] ! I0314 19:41:07.977476       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:42:14.556276    8428 command_runner.go:130] ! I0314 19:41:07.974992       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.556276    8428 command_runner.go:130] ! I0314 19:41:07.975020       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0314 19:42:14.556276    8428 command_runner.go:130] ! I0314 19:41:07.977827       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:14.556336    8428 command_runner.go:130] ! I0314 19:41:07.975030       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:14.556336    8428 command_runner.go:130] ! I0314 19:41:07.990774       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0314 19:42:14.556336    8428 command_runner.go:130] ! I0314 19:41:07.995647       1 ttl_controller.go:124] "Starting TTL controller"
	I0314 19:42:14.556336    8428 command_runner.go:130] ! I0314 19:41:07.995667       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0314 19:42:14.556386    8428 command_runner.go:130] ! I0314 19:41:08.019000       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0314 19:42:14.556386    8428 command_runner.go:130] ! I0314 19:41:08.019415       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:42:14.556386    8428 command_runner.go:130] ! I0314 19:41:08.019568       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:42:14.556386    8428 command_runner.go:130] ! I0314 19:41:08.019700       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:42:14.556386    8428 command_runner.go:130] ! E0314 19:41:08.029770       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0314 19:42:14.556442    8428 command_runner.go:130] ! I0314 19:41:08.029950       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0314 19:42:14.556442    8428 command_runner.go:130] ! I0314 19:41:08.030066       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0314 19:42:14.556442    8428 command_runner.go:130] ! I0314 19:41:08.030148       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0314 19:42:14.556442    8428 command_runner.go:130] ! I0314 19:41:08.056856       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.058933       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.059323       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.062839       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.063208       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.063512       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.070376       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.070635       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.070748       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.071006       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.071615       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.079849       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.080117       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.081765       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.084328       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.084731       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.085301       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.092529       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.092761       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.092771       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.097268       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.097521       1 expand_controller.go:328] "Starting expand controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.097531       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.097559       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.117374       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.117512       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.117524       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.126388       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.127645       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.127702       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.131336       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.131505       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! E0314 19:41:08.142589       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.142621       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.150057       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.152574       1 gc_controller.go:101] "Starting GC controller"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.152724       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.302881       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.303337       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0314 19:42:14.556489    8428 command_runner.go:130] ! W0314 19:41:08.303671       1 shared_informer.go:593] resyncPeriod 21h24m41.293167603s is smaller than resyncCheckPeriod 22h48m56.659186017s and the informer has already started. Changing it to 22h48m56.659186017s
	I0314 19:42:14.556489    8428 command_runner.go:130] ! I0314 19:41:08.303970       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0314 19:42:14.557015    8428 command_runner.go:130] ! I0314 19:41:08.304292       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0314 19:42:14.557015    8428 command_runner.go:130] ! I0314 19:41:08.304532       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0314 19:42:14.557015    8428 command_runner.go:130] ! I0314 19:41:08.304816       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0314 19:42:14.557073    8428 command_runner.go:130] ! I0314 19:41:08.305073       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0314 19:42:14.557073    8428 command_runner.go:130] ! I0314 19:41:08.305373       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0314 19:42:14.557073    8428 command_runner.go:130] ! I0314 19:41:08.305634       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0314 19:42:14.557073    8428 command_runner.go:130] ! I0314 19:41:08.305976       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0314 19:42:14.557121    8428 command_runner.go:130] ! I0314 19:41:08.306286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0314 19:42:14.557121    8428 command_runner.go:130] ! I0314 19:41:08.306541       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0314 19:42:14.557121    8428 command_runner.go:130] ! I0314 19:41:08.306699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0314 19:42:14.557121    8428 command_runner.go:130] ! I0314 19:41:08.306843       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0314 19:42:14.557186    8428 command_runner.go:130] ! I0314 19:41:08.307119       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0314 19:42:14.557186    8428 command_runner.go:130] ! I0314 19:41:08.307379       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:42:14.557186    8428 command_runner.go:130] ! I0314 19:41:08.307553       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0314 19:42:14.557186    8428 command_runner.go:130] ! I0314 19:41:08.307700       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0314 19:42:14.557237    8428 command_runner.go:130] ! I0314 19:41:08.308022       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0314 19:42:14.557237    8428 command_runner.go:130] ! I0314 19:41:08.308207       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0314 19:42:14.557237    8428 command_runner.go:130] ! I0314 19:41:08.308473       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:42:14.557237    8428 command_runner.go:130] ! I0314 19:41:08.308664       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:42:14.557292    8428 command_runner.go:130] ! I0314 19:41:08.309850       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:14.557292    8428 command_runner.go:130] ! I0314 19:41:08.310060       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:42:14.557356    8428 command_runner.go:130] ! I0314 19:41:08.344084       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0314 19:42:14.557356    8428 command_runner.go:130] ! I0314 19:41:08.344536       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0314 19:42:14.557356    8428 command_runner.go:130] ! I0314 19:41:08.344832       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0314 19:42:14.557469    8428 command_runner.go:130] ! I0314 19:41:08.397742       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:42:14.557469    8428 command_runner.go:130] ! I0314 19:41:08.400742       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:42:14.557469    8428 command_runner.go:130] ! I0314 19:41:08.401126       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:42:14.557469    8428 command_runner.go:130] ! I0314 19:41:08.448054       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:42:14.557469    8428 command_runner.go:130] ! I0314 19:41:08.448538       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:42:14.557469    8428 command_runner.go:130] ! I0314 19:41:08.495738       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0314 19:42:14.557539    8428 command_runner.go:130] ! I0314 19:41:08.496045       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0314 19:42:14.557539    8428 command_runner.go:130] ! I0314 19:41:08.496112       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0314 19:42:14.557539    8428 command_runner.go:130] ! I0314 19:41:08.547967       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:42:14.557539    8428 command_runner.go:130] ! I0314 19:41:08.548352       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.548556       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.593742       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.593860       1 job_controller.go:226] "Starting job controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.594297       1 shared_informer.go:311] Waiting for caches to sync for job
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.650392       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.650668       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.650851       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.704591       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.704627       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:08.704645       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.768485       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.768824       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.769281       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.769315       1 shared_informer.go:311] Waiting for caches to sync for node
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.779639       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.796167       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.796514       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.796299       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000\" does not exist"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.799471       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.799722       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.799937       1 shared_informer.go:318] Caches are synced for TTL
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.800165       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.802329       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.802379       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.806338       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.836188       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.842003       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.842516       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 19:42:14.557580    8428 command_runner.go:130] ! I0314 19:41:18.845380       1 shared_informer.go:318] Caches are synced for service account
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.848744       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.849154       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.849988       1 shared_informer.go:318] Caches are synced for namespace
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.850447       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.851139       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.852942       1 shared_informer.go:318] Caches are synced for GC
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.860631       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.862001       1 shared_informer.go:318] Caches are synced for cronjob
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.862045       1 shared_informer.go:318] Caches are synced for PVC protection
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.864453       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.865205       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.870312       1 shared_informer.go:318] Caches are synced for node
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.871490       1 range_allocator.go:174] "Sending events to api server"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.871652       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.871843       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.871901       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.871655       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.871600       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.877449       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.878919       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.880521       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.886337       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.895206       1 shared_informer.go:318] Caches are synced for job
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.898522       1 shared_informer.go:318] Caches are synced for expand
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.902360       1 shared_informer.go:318] Caches are synced for deployment
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.905493       1 shared_informer.go:318] Caches are synced for HPA
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.906213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.805878ms"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.908178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.802µs"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.908549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.720551ms"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.911784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.705µs"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.919410       1 shared_informer.go:318] Caches are synced for crt configmap
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.923587       1 shared_informer.go:318] Caches are synced for disruption
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.974303       1 shared_informer.go:318] Caches are synced for taint
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.974653       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.975178       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.975416       1 taint_manager.go:210] "Sending events to api server"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.977051       1 event.go:307] "Event occurred" object="multinode-442000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000 event: Registered Node multinode-442000 in Controller"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.977995       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.978165       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.980168       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:18.982162       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:19.001384       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:19.002299       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:19.002838       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:19.003844       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0314 19:42:14.558315    8428 command_runner.go:130] ! I0314 19:41:19.010468       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:19.393074       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:19.393161       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:19.450734       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:41.542550       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:44.029818       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:44.029853       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-d22jc" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-d22jc"
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:44.029866       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-7446n" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-7446n"
	I0314 19:42:14.558917    8428 command_runner.go:130] ! I0314 19:41:59.058949       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m02 status is now: NodeNotReady"
	I0314 19:42:14.559040    8428 command_runner.go:130] ! I0314 19:41:59.074940       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8drpb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.559040    8428 command_runner.go:130] ! I0314 19:41:59.085508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.938337ms"
	I0314 19:42:14.559040    8428 command_runner.go:130] ! I0314 19:41:59.086845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.804µs"
	I0314 19:42:14.559040    8428 command_runner.go:130] ! I0314 19:41:59.099029       1 event.go:307] "Event occurred" object="kube-system/kindnet-c7m4p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.559040    8428 command_runner.go:130] ! I0314 19:41:59.122329       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-72dzs" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:14.559124    8428 command_runner.go:130] ! I0314 19:42:12.281109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.332951ms"
	I0314 19:42:14.559124    8428 command_runner.go:130] ! I0314 19:42:12.281325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="115.209µs"
	I0314 19:42:14.559124    8428 command_runner.go:130] ! I0314 19:42:12.305037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.006µs"
	I0314 19:42:14.559175    8428 command_runner.go:130] ! I0314 19:42:12.366507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.074928ms"
	I0314 19:42:14.559175    8428 command_runner.go:130] ! I0314 19:42:12.368560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.408µs"
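Most of the controller-manager excerpt above is the shared-informer lifecycle: "Waiting for caches to sync" followed by "Caches are synced", plus the resyncPeriod reconciliation warning at 19:41:08.303671 (a per-resource resync shorter than the factory's check period gets bumped up). The same pattern from a plain client-go consumer, as a sketch (the kubeconfig path and 22-hour resync are illustrative values):

    package main

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Factory-wide default resync; informers registered with a
        // shorter period are bumped, which is what the resyncPeriod
        // warning in the log records.
        factory := informers.NewSharedInformerFactory(cs, 22*time.Hour)
        pods := factory.Core().V1().Pods().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)            // "Starting ... controller" analogue
        factory.WaitForCacheSync(stop) // "Caches are synced" analogue
        _ = pods.HasSynced()
    }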
	I0314 19:42:14.573536    8428 logs.go:123] Gathering logs for container status ...
	I0314 19:42:14.573536    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:42:14.664901    8428 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0314 19:42:14.664901    8428 command_runner.go:130] > b159aedddf94a       ead0a4a53df89                                                                                         3 seconds ago        Running             coredns                   1                   89f326046d00d       coredns-5dd5756b68-d22jc
	I0314 19:42:14.664901    8428 command_runner.go:130] > 813492ad2d666       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   cddebe360bf3a       busybox-5b5d89c9d6-7446n
	I0314 19:42:14.664901    8428 command_runner.go:130] > 3167caea2534f       6e38f40d628db                                                                                         21 seconds ago       Running             storage-provisioner       2                   a723f141543f2       storage-provisioner
	I0314 19:42:14.664901    8428 command_runner.go:130] > 999e4c168afef       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   a9176b5544663       kindnet-7b9lf
	I0314 19:42:14.664901    8428 command_runner.go:130] > 497007582e446       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   f513a7aff6720       kube-proxy-cg28g
	I0314 19:42:14.664901    8428 command_runner.go:130] > 2876622a2618d       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   a723f141543f2       storage-provisioner
	I0314 19:42:14.664901    8428 command_runner.go:130] > 32d90a3ea2131       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   c70744e60ac50       kube-scheduler-multinode-442000
	I0314 19:42:14.664901    8428 command_runner.go:130] > a598d24960de8       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   a27fa2188ee4c       kube-apiserver-multinode-442000
	I0314 19:42:14.664901    8428 command_runner.go:130] > 12baf105f0bb2       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   67475bf80ddd9       kube-controller-manager-multinode-442000
	I0314 19:42:14.664901    8428 command_runner.go:130] > a81a9c43c3552       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   35dd339c8a08d       etcd-multinode-442000
	I0314 19:42:14.664901    8428 command_runner.go:130] > 0cd43cdaa31c9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   fa0f2372c88ee       busybox-5b5d89c9d6-7446n
	I0314 19:42:14.664901    8428 command_runner.go:130] > 8899bc0038935       ead0a4a53df89                                                                                         22 minutes ago       Exited              coredns                   0                   a3dba3fc54c01       coredns-5dd5756b68-d22jc
	I0314 19:42:14.664901    8428 command_runner.go:130] > 1a321c0e89971       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              22 minutes ago       Exited              kindnet-cni               0                   b046b896affe9       kindnet-7b9lf
	I0314 19:42:14.664901    8428 command_runner.go:130] > 2a62baf3f1b46       83f6cc407eed8                                                                                         22 minutes ago       Exited              kube-proxy                0                   9b3244b47278e       kube-proxy-cg28g
	I0314 19:42:14.664901    8428 command_runner.go:130] > dbb603289bf16       e3db313c6dbc0                                                                                         23 minutes ago       Exited              kube-scheduler            0                   54e39762d7a64       kube-scheduler-multinode-442000
	I0314 19:42:14.664901    8428 command_runner.go:130] > 16b80f73683dc       d058aa5ab969c                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   102c907609a3a       kube-controller-manager-multinode-442000
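The container table above is produced by the fallback one-liner at 19:42:14.573536: try crictl first, fall back to docker if it is absent. The same fallback expressed in Go, as a sketch of the shell command rather than minikube's implementation (assumes the binaries and passwordless sudo on the target host):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // listContainers prefers the CRI client and falls back to the
    // Docker CLI, mirroring the gather step's shell one-liner.
    func listContainers() (string, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := listContainers()
        if err != nil {
            fmt.Println("no usable container runtime CLI:", err)
            return
        }
        fmt.Print(out)
    }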
	I0314 19:42:17.187279    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:42:17.211159    8428 command_runner.go:130] > 2008
	I0314 19:42:17.211296    8428 api_server.go:72] duration metric: took 1m6.3722812s to wait for apiserver process to appear ...
	I0314 19:42:17.211296    8428 api_server.go:88] waiting for apiserver healthz status ...
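With the kube-apiserver process confirmed (pid 2008 from pgrep), the tool now polls the apiserver's healthz endpoint. A minimal poll loop against the control-plane address seen in the lease reset above (the 8443 port and the skipped certificate verification are assumptions made only to keep the sketch short):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Sketch only: a real check should trust the cluster CA
            // instead of skipping verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 30; i++ {
            resp, err := client.Get("https://172.17.93.236:8443/healthz")
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for healthz")
    }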
	I0314 19:42:17.219950    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 19:42:17.246006    8428 command_runner.go:130] > a598d24960de
	I0314 19:42:17.246006    8428 logs.go:276] 1 containers: [a598d24960de]
	I0314 19:42:17.252918    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 19:42:17.278095    8428 command_runner.go:130] > a81a9c43c355
	I0314 19:42:17.278260    8428 logs.go:276] 1 containers: [a81a9c43c355]
	I0314 19:42:17.285100    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 19:42:17.312071    8428 command_runner.go:130] > b159aedddf94
	I0314 19:42:17.312363    8428 command_runner.go:130] > 8899bc003893
	I0314 19:42:17.312399    8428 logs.go:276] 2 containers: [b159aedddf94 8899bc003893]
	I0314 19:42:17.321791    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 19:42:17.350447    8428 command_runner.go:130] > 32d90a3ea213
	I0314 19:42:17.350447    8428 command_runner.go:130] > dbb603289bf1
	I0314 19:42:17.350447    8428 logs.go:276] 2 containers: [32d90a3ea213 dbb603289bf1]
	I0314 19:42:17.358067    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 19:42:17.386725    8428 command_runner.go:130] > 497007582e44
	I0314 19:42:17.386725    8428 command_runner.go:130] > 2a62baf3f1b4
	I0314 19:42:17.386725    8428 logs.go:276] 2 containers: [497007582e44 2a62baf3f1b4]
	I0314 19:42:17.394510    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 19:42:17.419653    8428 command_runner.go:130] > 12baf105f0bb
	I0314 19:42:17.419653    8428 command_runner.go:130] > 16b80f73683d
	I0314 19:42:17.419653    8428 logs.go:276] 2 containers: [12baf105f0bb 16b80f73683d]
	I0314 19:42:17.426754    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 19:42:17.452548    8428 command_runner.go:130] > 999e4c168afe
	I0314 19:42:17.452609    8428 command_runner.go:130] > 1a321c0e8997
	I0314 19:42:17.452609    8428 logs.go:276] 2 containers: [999e4c168afe 1a321c0e8997]
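Note: the enumeration above runs docker ps -a with a k8s_<component> name filter once per control-plane component. Two IDs per component are expected here because the pre-restart container is still present in the Exited state alongside the newly started one. A minimal sketch of the same enumeration, assuming a docker CLI on PATH (hypothetical helper, not minikube's logs.go):

    // containerids.go - a minimal sketch of the per-component container
    // enumeration shown above (hypothetical helper, not minikube's code).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs returns the IDs of all containers, running or exited,
    // whose name matches the kubelet's k8s_<component> prefix.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
    	}
    }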
	I0314 19:42:17.452609    8428 logs.go:123] Gathering logs for coredns [b159aedddf94] ...
	I0314 19:42:17.452609    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b159aedddf94"
	I0314 19:42:17.479998    8428 command_runner.go:130] > .:53
	I0314 19:42:17.479998    8428 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0314 19:42:17.479998    8428 command_runner.go:130] > CoreDNS-1.10.1
	I0314 19:42:17.479998    8428 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0314 19:42:17.479998    8428 command_runner.go:130] > [INFO] 127.0.0.1:38965 - 37747 "HINFO IN 9162400456686827331.1281991328183180689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052220616s
	I0314 19:42:17.481847    8428 logs.go:123] Gathering logs for coredns [8899bc003893] ...
	I0314 19:42:17.481847    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8899bc003893"
	I0314 19:42:17.511937    8428 command_runner.go:130] > .:53
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0314 19:42:17.511937    8428 command_runner.go:130] > CoreDNS-1.10.1
	I0314 19:42:17.511937    8428 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 127.0.0.1:56069 - 18242 "HINFO IN 687842018263708116.264844942244880205. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.040568923s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.0.3:42598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000297623s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.0.3:49284 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.094729955s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.0.3:58753 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.047978925s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.0.3:60240 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.250879171s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.1.2:35705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107809s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.1.2:38792 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00013461s
	I0314 19:42:17.511937    8428 command_runner.go:130] > [INFO] 10.244.1.2:53339 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000060304s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:55975 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000059805s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:55630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117109s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:50181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.122219329s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:58918 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194615s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:48641 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012501s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:57540 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.0346353s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:59969 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000278722s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:51295 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167413s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:45005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148512s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:51938 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100608s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:46248 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00024762s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:46501 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100408s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:52414 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056704s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:44908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000121409s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:49578 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011941s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:51057 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060205s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:56240 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055805s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:32901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172914s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:41115 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149912s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:40494 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013161s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:40575 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077106s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:55307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194115s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:46435 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00025832s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:52095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156813s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:57849 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012701s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:47270 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244119s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:59009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000411532s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:40925 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108108s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.0.3:56417 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000067706s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108409s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:38949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118209s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:56933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156413s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] 10.244.1.2:35971 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000072406s
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0314 19:42:17.512947    8428 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
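Note: each CoreDNS query line above appears to follow the log plugin's format: client ip:port, query id, "TYPE CLASS name. proto size do bufsize", then rcode, flags, response size, and duration. The NXDOMAIN answers for kubernetes.default and kubernetes.default.default.svc.cluster.local are the normal ndots search-path expansion before the FQDN kubernetes.default.svc.cluster.local resolves, and the closing SIGTERM/lameduck lines are the old coredns instance shutting down at the node restart. A minimal sketch that extracts the client, type, name, and rcode from one such line (format assumed from the samples above, not an official parser):

    // corednsline.go - a minimal sketch parsing a CoreDNS log-plugin
    // query line (format assumed from the samples above).
    package main

    import (
    	"fmt"
    	"regexp"
    )

    var queryRE = regexp.MustCompile(`^\[INFO\] ([\d.]+:\d+) - \d+ "(\S+) IN (\S+) \S+ \d+ \S+ \d+" (\S+)`)

    func main() {
    	line := `[INFO] 10.244.0.3:50181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.122219329s`
    	m := queryRE.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	// m[1]=client, m[2]=query type, m[3]=query name, m[4]=rcode
    	fmt.Printf("client=%s qtype=%s qname=%s rcode=%s\n", m[1], m[2], m[3], m[4])
    }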
	I0314 19:42:17.515934    8428 logs.go:123] Gathering logs for kube-scheduler [dbb603289bf1] ...
	I0314 19:42:17.515934    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb603289bf1"
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:18:59.007917       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:17.549523    8428 command_runner.go:130] ! W0314 19:19:00.211611       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0314 19:42:17.549523    8428 command_runner.go:130] ! W0314 19:19:00.212802       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:17.549523    8428 command_runner.go:130] ! W0314 19:19:00.212990       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0314 19:42:17.549523    8428 command_runner.go:130] ! W0314 19:19:00.213108       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:19:00.283055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:19:00.284207       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:19:00.288027       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:19:00.288233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:19:00.288206       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:42:17.549523    8428 command_runner.go:130] ! I0314 19:19:00.290233       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:17.550063    8428 command_runner.go:130] ! W0314 19:19:00.293166       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:17.550116    8428 command_runner.go:130] ! E0314 19:19:00.293367       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.311723       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! E0314 19:19:00.311803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.312480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! E0314 19:19:00.317665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313450       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313705       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:17.550148    8428 command_runner.go:130] ! W0314 19:19:00.313774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:17.550676    8428 command_runner.go:130] ! W0314 19:19:00.313864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:17.550676    8428 command_runner.go:130] ! W0314 19:19:00.313910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! W0314 19:19:00.313978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! W0314 19:19:00.314056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! W0314 19:19:00.314091       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.318101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.318394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.318606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.318728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.318953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.319076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.319318       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.319575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:17.550752    8428 command_runner.go:130] ! E0314 19:19:00.319588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:17.551281    8428 command_runner.go:130] ! E0314 19:19:00.319719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:00.319732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:00.319788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! W0314 19:19:01.268901       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:01.269219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! W0314 19:19:01.309661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:01.309894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! W0314 19:19:01.318104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:01.318410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! W0314 19:19:01.382148       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:01.382194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! W0314 19:19:01.454259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:01.454398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! W0314 19:19:01.505982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:17.551321    8428 command_runner.go:130] ! E0314 19:19:01.506182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:17.551849    8428 command_runner.go:130] ! W0314 19:19:01.640521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.640836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.681052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.681953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.732243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.732288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.767241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.767329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.783665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.783845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.812936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.813027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.821109       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.821267       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:17.551906    8428 command_runner.go:130] ! W0314 19:19:01.843311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:17.551906    8428 command_runner.go:130] ! E0314 19:19:01.843339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:17.552435    8428 command_runner.go:130] ! W0314 19:19:01.914649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:17.552435    8428 command_runner.go:130] ! E0314 19:19:01.914986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:17.552435    8428 command_runner.go:130] ! I0314 19:19:04.090863       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:17.552435    8428 command_runner.go:130] ! I0314 19:38:43.236637       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0314 19:42:17.552435    8428 command_runner.go:130] ! I0314 19:38:43.237145       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0314 19:42:17.552518    8428 command_runner.go:130] ! E0314 19:38:43.237439       1 run.go:74] "command failed" err="finished without leader elect"
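Note: the burst of "forbidden" warnings above is the usual scheduler startup race: the scheduler's informers start before the apiserver has published the bootstrap RBAC bindings, the reflectors retry, and the errors stop once "Caches are synced" at 19:19:04. Only the final lines (19:38:43) relate to the restart itself, when secure serving stopped and the command exited with "finished without leader elect". A minimal sketch of how one could confirm the RBAC side from outside, assuming kubectl on PATH with an admin context (hypothetical check, not part of the test):

    // schedulerrbac.go - a minimal sketch reproducing the RBAC check
    // behind the startup warnings above: can system:kube-scheduler list
    // configmaps in kube-system yet? (Hypothetical check via kubectl.)
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("kubectl", "auth", "can-i",
    		"list", "configmaps",
    		"--namespace", "kube-system",
    		"--as", "system:kube-scheduler").CombinedOutput()
    	answer := strings.TrimSpace(string(out))
    	// "yes" once the bootstrap RBAC bindings exist; "no" (with a
    	// non-zero exit, hence err != nil) during the window logged above.
    	fmt.Printf("can-i list configmaps as system:kube-scheduler: %q (err=%v)\n", answer, err)
    }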
	I0314 19:42:17.562343    8428 logs.go:123] Gathering logs for kindnet [1a321c0e8997] ...
	I0314 19:42:17.562343    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a321c0e8997"
	I0314 19:42:17.594928    8428 command_runner.go:130] ! I0314 19:27:36.366640       1 main.go:227] handling current node
	I0314 19:42:17.594928    8428 command_runner.go:130] ! I0314 19:27:36.366652       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595497    8428 command_runner.go:130] ! I0314 19:27:36.366658       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595497    8428 command_runner.go:130] ! I0314 19:27:36.366818       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595497    8428 command_runner.go:130] ! I0314 19:27:36.366827       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595497    8428 command_runner.go:130] ! I0314 19:27:46.378468       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595555    8428 command_runner.go:130] ! I0314 19:27:46.378496       1 main.go:227] handling current node
	I0314 19:42:17.595555    8428 command_runner.go:130] ! I0314 19:27:46.378506       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595555    8428 command_runner.go:130] ! I0314 19:27:46.378513       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595555    8428 command_runner.go:130] ! I0314 19:27:46.379039       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:46.379130       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:56.393642       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:56.393700       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:56.393723       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:56.393733       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:56.394716       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:27:56.394779       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:06.403171       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:06.403199       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:06.403212       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:06.403219       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:06.403663       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:06.403834       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:16.415146       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:16.415237       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:16.415250       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:16.415260       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:16.415497       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:16.415703       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:26.430257       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:26.430350       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:26.430364       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:26.430372       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:26.430709       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:26.430804       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:36.445854       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:36.445897       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:36.445915       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:36.446285       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:36.446702       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:36.446731       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:46.461369       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:46.462057       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:46.462235       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:46.462250       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:46.462593       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:46.462770       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:56.477451       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:56.477483       1 main.go:227] handling current node
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:56.477496       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.595600    8428 command_runner.go:130] ! I0314 19:28:56.477508       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:28:56.478007       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:28:56.478089       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:29:06.484423       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:29:06.484497       1 main.go:227] handling current node
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:29:06.484559       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:29:06.484624       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:29:06.484852       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596140    8428 command_runner.go:130] ! I0314 19:29:06.484945       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596216    8428 command_runner.go:130] ! I0314 19:29:16.500812       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596216    8428 command_runner.go:130] ! I0314 19:29:16.500909       1 main.go:227] handling current node
	I0314 19:42:17.596216    8428 command_runner.go:130] ! I0314 19:29:16.500924       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:16.500932       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:16.501505       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:16.501585       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:26.508494       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:26.508585       1 main.go:227] handling current node
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:26.508601       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:26.508609       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:26.508822       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:26.508837       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:36.517002       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:36.517123       1 main.go:227] handling current node
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:36.517142       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:36.517155       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:36.517648       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:36.517836       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:46.530826       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:46.530962       1 main.go:227] handling current node
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:46.530978       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:46.531314       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:46.531557       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:46.531706       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:56.551916       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:56.551953       1 main.go:227] handling current node
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:56.551965       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:56.551971       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:56.552084       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:29:56.552107       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:06.560066       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:06.560115       1 main.go:227] handling current node
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:06.560129       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:06.560136       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:06.560429       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:06.560534       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:16.573690       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:16.573731       1 main.go:227] handling current node
	I0314 19:42:17.596333    8428 command_runner.go:130] ! I0314 19:30:16.573978       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596873    8428 command_runner.go:130] ! I0314 19:30:16.574067       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596873    8428 command_runner.go:130] ! I0314 19:30:16.574385       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596873    8428 command_runner.go:130] ! I0314 19:30:16.574414       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596873    8428 command_runner.go:130] ! I0314 19:30:26.589277       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596873    8428 command_runner.go:130] ! I0314 19:30:26.589488       1 main.go:227] handling current node
	I0314 19:42:17.596930    8428 command_runner.go:130] ! I0314 19:30:26.589534       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596930    8428 command_runner.go:130] ! I0314 19:30:26.589557       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596930    8428 command_runner.go:130] ! I0314 19:30:26.589802       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596930    8428 command_runner.go:130] ! I0314 19:30:26.589885       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596930    8428 command_runner.go:130] ! I0314 19:30:36.605356       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.596930    8428 command_runner.go:130] ! I0314 19:30:36.605400       1 main.go:227] handling current node
	I0314 19:42:17.596986    8428 command_runner.go:130] ! I0314 19:30:36.605412       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.596986    8428 command_runner.go:130] ! I0314 19:30:36.605418       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.596986    8428 command_runner.go:130] ! I0314 19:30:36.605556       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.596986    8428 command_runner.go:130] ! I0314 19:30:36.605625       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.596986    8428 command_runner.go:130] ! I0314 19:30:46.612911       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597039    8428 command_runner.go:130] ! I0314 19:30:46.613010       1 main.go:227] handling current node
	I0314 19:42:17.597039    8428 command_runner.go:130] ! I0314 19:30:46.613025       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597039    8428 command_runner.go:130] ! I0314 19:30:46.613034       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597039    8428 command_runner.go:130] ! I0314 19:30:46.613445       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597084    8428 command_runner.go:130] ! I0314 19:30:46.615380       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597084    8428 command_runner.go:130] ! I0314 19:30:56.630605       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597084    8428 command_runner.go:130] ! I0314 19:30:56.630965       1 main.go:227] handling current node
	I0314 19:42:17.597084    8428 command_runner.go:130] ! I0314 19:30:56.631076       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597126    8428 command_runner.go:130] ! I0314 19:30:56.631132       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597126    8428 command_runner.go:130] ! I0314 19:30:56.631442       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:30:56.631542       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:06.643588       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:06.643631       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:06.643643       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:06.643650       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:06.644160       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:06.644255       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:16.650940       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:16.651187       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:16.651208       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:16.651236       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:16.651354       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:16.651374       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:26.665304       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:26.665403       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:26.665418       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:26.665427       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:26.665674       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:26.665859       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:36.681645       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:36.681680       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:36.681695       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:36.681704       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:36.682032       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:36.682062       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:46.697305       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:46.697415       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:46.697432       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:46.697444       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:46.697965       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:46.698093       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:56.705518       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:56.705613       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:56.705627       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:56.705635       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:56.706151       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:31:56.706269       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:32:06.716977       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:32:06.717087       1 main.go:227] handling current node
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:32:06.717105       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597327    8428 command_runner.go:130] ! I0314 19:32:06.717116       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597877    8428 command_runner.go:130] ! I0314 19:32:06.717701       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597877    8428 command_runner.go:130] ! I0314 19:32:06.717870       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597877    8428 command_runner.go:130] ! I0314 19:32:16.738903       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597877    8428 command_runner.go:130] ! I0314 19:32:16.738946       1 main.go:227] handling current node
	I0314 19:42:17.597877    8428 command_runner.go:130] ! I0314 19:32:16.738962       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597877    8428 command_runner.go:130] ! I0314 19:32:16.738971       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597943    8428 command_runner.go:130] ! I0314 19:32:16.739310       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597943    8428 command_runner.go:130] ! I0314 19:32:16.739420       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597943    8428 command_runner.go:130] ! I0314 19:32:26.749067       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:26.749521       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:26.749656       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:26.749670       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:26.750040       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:26.750074       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:36.765313       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:36.765423       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:36.765442       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:36.765453       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:36.766102       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:36.766130       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:46.781715       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:46.781800       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:46.782151       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:46.782168       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:46.782370       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:46.782396       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:56.797473       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:56.797568       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:56.797583       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:56.797621       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:56.797733       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:32:56.797772       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:06.803421       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:06.803513       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:06.803527       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:06.803534       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:06.804158       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:06.804237       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:16.818983       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:16.819134       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:16.819149       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:16.819157       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:16.819421       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:16.819491       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:26.826209       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:26.826474       1 main.go:227] handling current node
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:26.826509       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:26.826519       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:26.826794       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:26.826886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:36.839979       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.597972    8428 command_runner.go:130] ! I0314 19:33:36.840555       1 main.go:227] handling current node
	I0314 19:42:17.598511    8428 command_runner.go:130] ! I0314 19:33:36.840828       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598511    8428 command_runner.go:130] ! I0314 19:33:36.840855       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598511    8428 command_runner.go:130] ! I0314 19:33:36.841055       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598511    8428 command_runner.go:130] ! I0314 19:33:36.841183       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598511    8428 command_runner.go:130] ! I0314 19:33:46.854483       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598566    8428 command_runner.go:130] ! I0314 19:33:46.854585       1 main.go:227] handling current node
	I0314 19:42:17.598566    8428 command_runner.go:130] ! I0314 19:33:46.854600       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598566    8428 command_runner.go:130] ! I0314 19:33:46.854608       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:46.855303       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:46.855389       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:56.867052       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:56.867136       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:56.867150       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:56.867158       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:56.867493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:33:56.867886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:06.874298       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:06.874391       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:06.874405       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:06.874413       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:06.874932       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:06.874962       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:16.890513       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:16.890589       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:16.890604       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:16.890612       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:16.890870       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:16.890953       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:26.908423       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:26.908576       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:26.908597       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:26.908606       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:26.909103       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:26.909271       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:36.915794       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:36.915910       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:36.915926       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:36.915935       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:36.916282       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:36.916372       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:46.931699       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:46.931833       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:46.931849       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:46.931858       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:46.932099       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:46.932124       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:56.946470       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:56.946565       1 main.go:227] handling current node
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:56.946580       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:56.946588       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:56.946812       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.598599    8428 command_runner.go:130] ! I0314 19:34:56.946927       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:06.960844       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:06.960939       1 main.go:227] handling current node
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:06.960954       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:06.960962       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:06.961467       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:06.961574       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599142    8428 command_runner.go:130] ! I0314 19:35:16.981993       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:16.982080       1 main.go:227] handling current node
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:16.982095       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:16.982103       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:16.982594       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:16.982673       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:26.993848       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:26.993940       1 main.go:227] handling current node
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:26.993955       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:26.993963       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:26.994360       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:26.994437       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:37.008613       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:37.008706       1 main.go:227] handling current node
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:37.008720       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:37.008727       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:37.009233       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:37.009320       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:47.018420       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:47.018526       1 main.go:227] handling current node
	I0314 19:42:17.599209    8428 command_runner.go:130] ! I0314 19:35:47.018541       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:47.018549       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:47.018669       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:47.018680       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:57.025132       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:57.025207       1 main.go:227] handling current node
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:57.025220       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599750    8428 command_runner.go:130] ! I0314 19:35:57.025228       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:35:57.026009       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:35:57.026145       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:07.042281       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:07.042353       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:07.042367       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:07.042375       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:07.042493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:07.042500       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:17.055539       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:17.055567       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:17.055581       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:17.055588       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:17.056312       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:17.056341       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:27.067921       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:27.067961       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:27.069052       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:27.069179       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:27.069306       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:27.069332       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:37.082322       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:37.082413       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:37.082429       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:37.082437       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:37.082972       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:37.083000       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:47.099685       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:47.099830       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:47.099862       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:47.099982       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.107274       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.107368       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.107382       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.107390       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.107827       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.107942       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:36:57.108076       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.84.215 Flags: [] Table: 0} 
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:37:07.120709       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:37:07.121059       1 main.go:227] handling current node
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:37:07.121098       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.599839    8428 command_runner.go:130] ! I0314 19:37:07.121109       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:07.121440       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:07.121455       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:17.137704       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:17.137784       1 main.go:227] handling current node
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:17.137796       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:17.137803       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:17.138265       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:17.138298       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:27.144505       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:27.144594       1 main.go:227] handling current node
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:27.144607       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:27.144615       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600378    8428 command_runner.go:130] ! I0314 19:37:27.145062       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:27.145092       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:37.154684       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:37.154836       1 main.go:227] handling current node
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:37.154851       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:37.154860       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:37.155452       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:37.155614       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600524    8428 command_runner.go:130] ! I0314 19:37:47.168249       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:47.168338       1 main.go:227] handling current node
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:47.168352       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:47.168360       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:47.168976       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:47.169064       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:57.176039       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:57.176130       1 main.go:227] handling current node
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:57.176145       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:57.176153       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:57.176528       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:37:57.176659       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:07.189890       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:07.189993       1 main.go:227] handling current node
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:07.190008       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:07.190016       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:07.190217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:07.190245       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:17.196541       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:17.196633       1 main.go:227] handling current node
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:17.196647       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:17.196655       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:17.196888       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:17.197012       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:27.217365       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:27.217460       1 main.go:227] handling current node
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:27.217475       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:27.217483       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:27.217621       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:27.217634       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:37.229941       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:37.230048       1 main.go:227] handling current node
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:37.230062       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:37.230070       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:37.230268       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:17.600613    8428 command_runner.go:130] ! I0314 19:38:37.230338       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:17.617295    8428 logs.go:123] Gathering logs for dmesg ...
	I0314 19:42:17.617295    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:42:17.637870    8428 command_runner.go:130] > [Mar14 19:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.111500] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.025646] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.051209] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.017569] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0314 19:42:17.637870    8428 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +5.774438] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.663188] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +1.473946] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +5.849126] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0314 19:42:17.637870    8428 command_runner.go:130] > [Mar14 19:40] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.179743] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [ +24.853688] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.096946] kauditd_printk_skb: 73 callbacks suppressed
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.497369] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.185545] systemd-fstab-generator[1021]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.215423] systemd-fstab-generator[1035]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +2.887443] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.193519] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.182072] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.258988] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.819687] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +0.099817] kauditd_printk_skb: 205 callbacks suppressed
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +2.940519] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [Mar14 19:41] kauditd_printk_skb: 84 callbacks suppressed
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +4.042735] systemd-fstab-generator[3087]: Ignoring "noauto" option for root device
	I0314 19:42:17.637870    8428 command_runner.go:130] > [  +7.733278] kauditd_printk_skb: 70 callbacks suppressed
	I0314 19:42:17.640374    8428 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:42:17.640374    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:42:17.851291    8428 command_runner.go:130] > Name:               multinode-442000
	I0314 19:42:17.851291    8428 command_runner.go:130] > Roles:              control-plane
	I0314 19:42:17.851291    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:17.851291    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:17.851291    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:17.851291    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000
	I0314 19:42:17.851291    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:17.851413    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:17.851413    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_19_05_0700
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0314 19:42:17.851449    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:17.851449    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:17.851449    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:19:00 +0000
	I0314 19:42:17.851449    8428 command_runner.go:130] > Taints:             <none>
	I0314 19:42:17.851449    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:17.851449    8428 command_runner.go:130] > Lease:
	I0314 19:42:17.851449    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000
	I0314 19:42:17.851449    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:17.851449    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:42:17 +0000
	I0314 19:42:17.851449    8428 command_runner.go:130] > Conditions:
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0314 19:42:17.851449    8428 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0314 19:42:17.851449    8428 command_runner.go:130] >   MemoryPressure   False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0314 19:42:17.851449    8428 command_runner.go:130] >   DiskPressure     False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0314 19:42:17.851449    8428 command_runner.go:130] >   PIDPressure      False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Ready            True    Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:41:41 +0000   KubeletReady                 kubelet is posting ready status
	I0314 19:42:17.851449    8428 command_runner.go:130] > Addresses:
	I0314 19:42:17.851449    8428 command_runner.go:130] >   InternalIP:  172.17.93.236
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Hostname:    multinode-442000
	I0314 19:42:17.851449    8428 command_runner.go:130] > Capacity:
	I0314 19:42:17.851449    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:17.851449    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:17.851449    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:17.851449    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:17.851449    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:17.851449    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:17.851449    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:17.851449    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:17.851449    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:17.851449    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:17.851449    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:17.851449    8428 command_runner.go:130] > System Info:
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Machine ID:                 37c811f81f1d4d709fd4a6eb79d70749
	I0314 19:42:17.851449    8428 command_runner.go:130] >   System UUID:                8469b663-ea90-da4f-856d-11034a8f65d8
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Boot ID:                    91589624-f8f3-469e-b556-aa6dd64e54de
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:17.851449    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:17.851449    8428 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0314 19:42:17.851449    8428 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0314 19:42:17.851449    8428 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0314 19:42:17.851449    8428 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:17.851971    8428 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0314 19:42:17.851971    8428 command_runner.go:130] >   default                     busybox-5b5d89c9d6-7446n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0314 19:42:17.851971    8428 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-d22jc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0314 19:42:17.851971    8428 command_runner.go:130] >   kube-system                 etcd-multinode-442000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0314 19:42:17.851971    8428 command_runner.go:130] >   kube-system                 kindnet-7b9lf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0314 19:42:17.851971    8428 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-442000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0314 19:42:17.852054    8428 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-442000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:17.852177    8428 command_runner.go:130] >   kube-system                 kube-proxy-cg28g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:17.852177    8428 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-442000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0314 19:42:17.852177    8428 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0314 19:42:17.852177    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:17.852177    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:17.852177    8428 command_runner.go:130] >   Resource           Requests     Limits
	I0314 19:42:17.852177    8428 command_runner.go:130] >   --------           --------     ------
	I0314 19:42:17.852177    8428 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0314 19:42:17.852271    8428 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0314 19:42:17.852271    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0314 19:42:17.852271    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0314 19:42:17.852271    8428 command_runner.go:130] > Events:
	I0314 19:42:17.852271    8428 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0314 19:42:17.852271    8428 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0314 19:42:17.852271    8428 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0314 19:42:17.852271    8428 command_runner.go:130] >   Normal  Starting                 69s                kube-proxy       
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  Starting                 23m                kubelet          Starting kubelet.
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-442000 status is now: NodeReady
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  Starting                 78s                kubelet          Starting kubelet.
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	I0314 19:42:17.852412    8428 command_runner.go:130] > Name:               multinode-442000-m02
	I0314 19:42:17.852412    8428 command_runner.go:130] > Roles:              <none>
	I0314 19:42:17.852412    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000-m02
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_22_02_0700
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:17.852412    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:17.852412    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:22:02 +0000
	I0314 19:42:17.852412    8428 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0314 19:42:17.852412    8428 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0314 19:42:17.852412    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:17.852412    8428 command_runner.go:130] > Lease:
	I0314 19:42:17.852412    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000-m02
	I0314 19:42:17.852412    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:17.852412    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:38:03 +0000
	I0314 19:42:17.852412    8428 command_runner.go:130] > Conditions:
	I0314 19:42:17.852412    8428 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0314 19:42:17.852412    8428 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0314 19:42:17.852412    8428 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.852939    8428 command_runner.go:130] >   DiskPressure     Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.852939    8428 command_runner.go:130] >   PIDPressure      Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.852939    8428 command_runner.go:130] >   Ready            Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.852939    8428 command_runner.go:130] > Addresses:
	I0314 19:42:17.852939    8428 command_runner.go:130] >   InternalIP:  172.17.80.135
	I0314 19:42:17.852939    8428 command_runner.go:130] >   Hostname:    multinode-442000-m02
	I0314 19:42:17.853143    8428 command_runner.go:130] > Capacity:
	I0314 19:42:17.853143    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:17.853143    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:17.853143    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:17.853143    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:17.853143    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:17.853143    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:17.853143    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:17.853314    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:17.853333    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:17.853333    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:17.853333    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:17.853333    8428 command_runner.go:130] > System Info:
	I0314 19:42:17.853333    8428 command_runner.go:130] >   Machine ID:                 35b6f7da4d3943d99d8a5913cae1c8fb
	I0314 19:42:17.853333    8428 command_runner.go:130] >   System UUID:                0b9b8376-0767-f940-9973-d373e3dc050d
	I0314 19:42:17.853333    8428 command_runner.go:130] >   Boot ID:                    45d479cc-26e8-46a6-9431-50637071f586
	I0314 19:42:17.853392    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:17.853392    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:17.853409    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:17.853409    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:17.853409    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:17.853409    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:17.853409    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:17.853409    8428 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0314 19:42:17.853409    8428 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0314 19:42:17.853409    8428 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0314 19:42:17.853494    8428 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:17.853494    8428 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0314 19:42:17.853494    8428 command_runner.go:130] >   default                     busybox-5b5d89c9d6-8drpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0314 19:42:17.853494    8428 command_runner.go:130] >   kube-system                 kindnet-c7m4p               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0314 19:42:17.853494    8428 command_runner.go:130] >   kube-system                 kube-proxy-72dzs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0314 19:42:17.853494    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:17.853494    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Resource           Requests   Limits
	I0314 19:42:17.853569    8428 command_runner.go:130] >   --------           --------   ------
	I0314 19:42:17.853569    8428 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0314 19:42:17.853569    8428 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0314 19:42:17.853569    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0314 19:42:17.853569    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0314 19:42:17.853569    8428 command_runner.go:130] > Events:
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0314 19:42:17.853569    8428 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientMemory
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasNoDiskPressure
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientPID
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  NodeReady                19m                kubelet          Node multinode-442000-m02 status is now: NodeReady
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Normal  NodeNotReady             19s                node-controller  Node multinode-442000-m02 status is now: NodeNotReady
	I0314 19:42:17.853569    8428 command_runner.go:130] > Name:               multinode-442000-m03
	I0314 19:42:17.853569    8428 command_runner.go:130] > Roles:              <none>
	I0314 19:42:17.853569    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000-m03
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_36_47_0700
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:17.853569    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:17.853569    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:36:47 +0000
	I0314 19:42:17.853569    8428 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0314 19:42:17.853569    8428 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0314 19:42:17.853569    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:17.853569    8428 command_runner.go:130] > Lease:
	I0314 19:42:17.853569    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000-m03
	I0314 19:42:17.853569    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:17.853569    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:37:37 +0000
	I0314 19:42:17.853569    8428 command_runner.go:130] > Conditions:
	I0314 19:42:17.853569    8428 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0314 19:42:17.853569    8428 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0314 19:42:17.853569    8428 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.854104    8428 command_runner.go:130] >   DiskPressure     Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.854104    8428 command_runner.go:130] >   PIDPressure      Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.854104    8428 command_runner.go:130] >   Ready            Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:17.854104    8428 command_runner.go:130] > Addresses:
	I0314 19:42:17.854104    8428 command_runner.go:130] >   InternalIP:  172.17.84.215
	I0314 19:42:17.854104    8428 command_runner.go:130] >   Hostname:    multinode-442000-m03
	I0314 19:42:17.854104    8428 command_runner.go:130] > Capacity:
	I0314 19:42:17.854104    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:17.854176    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:17.854176    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:17.854176    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:17.854176    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:17.854176    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:17.854176    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:17.854220    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:17.854220    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:17.854220    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:17.854220    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:17.854220    8428 command_runner.go:130] > System Info:
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Machine ID:                 dc7772516bfe448db22a5c28796f53ab
	I0314 19:42:17.854220    8428 command_runner.go:130] >   System UUID:                71573585-d564-f043-9154-3d5854ce61b8
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Boot ID:                    fed746b2-110b-43ee-9065-09983ba74a37
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:17.854220    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:17.854220    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:17.854331    8428 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0314 19:42:17.854331    8428 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0314 19:42:17.854331    8428 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0314 19:42:17.854331    8428 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:17.854331    8428 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0314 19:42:17.854331    8428 command_runner.go:130] >   kube-system                 kindnet-r7zdb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	I0314 19:42:17.854331    8428 command_runner.go:130] >   kube-system                 kube-proxy-w2qls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	I0314 19:42:17.854451    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:17.854451    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:17.854451    8428 command_runner.go:130] >   Resource           Requests   Limits
	I0314 19:42:17.854451    8428 command_runner.go:130] >   --------           --------   ------
	I0314 19:42:17.854541    8428 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0314 19:42:17.854541    8428 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0314 19:42:17.854541    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0314 19:42:17.854668    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0314 19:42:17.854668    8428 command_runner.go:130] > Events:
	I0314 19:42:17.854668    8428 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0314 19:42:17.854668    8428 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0314 19:42:17.854728    8428 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0314 19:42:17.854728    8428 command_runner.go:130] >   Normal  Starting                 5m29s                  kube-proxy       
	I0314 19:42:17.854766    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	I0314 19:42:17.854766    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	I0314 19:42:17.854766    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-442000-m03 status is now: NodeReady
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m31s (x5 over 5m33s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m31s (x5 over 5m33s)  kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m31s (x5 over 5m33s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  RegisteredNode           5m27s                  node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  NodeReady                5m24s                  kubelet          Node multinode-442000-m03 status is now: NodeReady
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  NodeNotReady             3m57s                  node-controller  Node multinode-442000-m03 status is now: NodeNotReady
	I0314 19:42:17.854810    8428 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
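
The describe output above captures the failure signature: on both multinode-442000-m02 and multinode-442000-m03 all four kubelet-reported conditions (MemoryPressure, DiskPressure, PIDPressure, Ready) read Unknown with the message "Kubelet stopped posting node status.", after which the node controller applied the node.kubernetes.io/unreachable NoExecute/NoSchedule taints and emitted NodeNotReady. A minimal client-go sketch of the same readiness check, useful for triaging this kind of failure outside the test harness (the kubeconfig path is an assumption, not taken from this run):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed: default kubeconfig at ~/.kube/config with the right context.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    // Status is True, False, or Unknown; Unknown with reason
                    // NodeStatusUnknown matches the describe tables above.
                    fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
                }
            }
        }
    }

Run against this cluster it would print Ready=Unknown reason=NodeStatusUnknown for both secondary nodes, matching the condition tables above.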
	I0314 19:42:17.863994    8428 logs.go:123] Gathering logs for etcd [a81a9c43c355] ...
	I0314 19:42:17.863994    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81a9c43c355"
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:01.944953Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.945607Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.93.236:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.93.236:2380","--initial-cluster=multinode-442000=https://172.17.93.236:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.93.236:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.93.236:2380","--name=multinode-442000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--prox
y-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.945676Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:01.945701Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94571Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.93.236:2380"]}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94582Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94751Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"]}
	I0314 19:42:17.897967    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.948798Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-442000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-
cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0314 19:42:17.898497    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.989049Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"39.493838ms"}
	I0314 19:42:17.898541    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.0258Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0314 19:42:17.898598    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.055698Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","commit-index":1967}
	I0314 19:42:17.898639    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.067927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=()"}
	I0314 19:42:17.898639    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.067975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became follower at term 2"}
	I0314 19:42:17.898692    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.068051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fa26a6ed08186c39 [peers: [], term: 2, commit: 1967, applied: 0, lastindex: 1967, lastterm: 2]"}
	I0314 19:42:17.898692    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:02.100633Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0314 19:42:17.898733    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.113992Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1090}
	I0314 19:42:17.898733    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.125551Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1704}
	I0314 19:42:17.898786    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.137052Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0314 19:42:17.898786    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.152836Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"fa26a6ed08186c39","timeout":"7s"}
	I0314 19:42:17.898820    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.153448Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"fa26a6ed08186c39"}
	I0314 19:42:17.898820    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.153504Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"fa26a6ed08186c39","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0314 19:42:17.898868    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154089Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0314 19:42:17.898868    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154894Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154977Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154992Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=(18025278095570267193)"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158756Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","added-peer-id":"fa26a6ed08186c39","added-peer-peer-urls":["https://172.17.86.124:2380"]}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","cluster-version":"3.5"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158969Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.159838Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.160148Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"fa26a6ed08186c39","initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.160272Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.161335Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.93.236:2380"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.161389Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.93.236:2380"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 is starting a new election at term 2"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became pre-candidate at term 2"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgPreVoteResp from fa26a6ed08186c39 at term 2"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became candidate at term 3"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgVoteResp from fa26a6ed08186c39 at term 3"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became leader at term 3"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fa26a6ed08186c39 elected leader fa26a6ed08186c39 at term 3"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.292472Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fa26a6ed08186c39","local-member-attributes":"{Name:multinode-442000 ClientURLs:[https://172.17.93.236:2379]}","request-path":"/0/members/fa26a6ed08186c39/attributes","cluster-id":"76b99849a2fc5549","publish-timeout":"7s"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.292867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.296522Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.298446Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.311867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.93.236:2379"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.311957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0314 19:42:17.898938    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.31205Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0314 19:42:17.904943    8428 logs.go:123] Gathering logs for kube-proxy [497007582e44] ...
	I0314 19:42:17.904943    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497007582e44"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.342277       1 server_others.go:69] "Using iptables proxy"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.381589       1 node.go:141] Successfully retrieved node IP: 172.17.93.236
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.703360       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.703384       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.724122       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.726554       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.729424       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.729460       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.732062       1 config.go:188] "Starting service config controller"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.732501       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.732571       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.732581       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.733523       1 config.go:315] "Starting node config controller"
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.733550       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.832968       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.833049       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:42:17.931281    8428 command_runner.go:130] ! I0314 19:41:08.835501       1 shared_informer.go:318] Caches are synced for node config
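
The kube-proxy log above is the standard client-go shared-informer startup sequence: each config controller logs "Waiting for caches to sync" and, once its watch cache has caught up with the API server, "Caches are synced". A minimal sketch of that same pattern (the kubeconfig path and the choice of a Services informer are illustrative assumptions, not kube-proxy's actual wiring):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        stop := make(chan struct{})
        defer close(stop)

        // Same shape as the log: construct informers, start them, then block
        // until the local caches have caught up with the API server.
        factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
        svcInformer := factory.Core().V1().Services().Informer()
        factory.Start(stop)

        if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
            panic("timed out waiting for caches to sync")
        }
        fmt.Println("caches are synced for service config")
    }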
	I0314 19:42:17.933860    8428 logs.go:123] Gathering logs for kube-controller-manager [12baf105f0bb] ...
	I0314 19:42:17.933914    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12baf105f0bb"
	I0314 19:42:17.963142    8428 command_runner.go:130] ! I0314 19:41:03.101287       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:17.964008    8428 command_runner.go:130] ! I0314 19:41:03.872151       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:03.874301       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:03.879645       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:03.880765       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:03.883873       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:03.883977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.787609       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.796442       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.796953       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.798900       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.848846       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.849015       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.849025       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.855296       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.858491       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.858512       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.864964       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.865080       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.865088       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.870629       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.871089       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.871332       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.889997       1 shared_informer.go:318] Caches are synced for tokens
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.899597       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.900355       1 horizontal.go:200] "Starting HPA controller"
	I0314 19:42:17.964055    8428 command_runner.go:130] ! I0314 19:41:07.901325       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0314 19:42:17.982925    8428 command_runner.go:130] ! I0314 19:41:07.921217       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0314 19:42:17.982925    8428 command_runner.go:130] ! I0314 19:41:07.922072       1 disruption.go:433] "Sending events to api server."
	I0314 19:42:17.982925    8428 command_runner.go:130] ! I0314 19:41:07.922293       1 disruption.go:444] "Starting disruption controller"
	I0314 19:42:17.982925    8428 command_runner.go:130] ! I0314 19:41:07.922481       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:42:17.983008    8428 command_runner.go:130] ! I0314 19:41:07.927437       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0314 19:42:17.983008    8428 command_runner.go:130] ! I0314 19:41:07.929290       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0314 19:42:17.983008    8428 command_runner.go:130] ! I0314 19:41:07.929325       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0314 19:42:17.983008    8428 command_runner.go:130] ! I0314 19:41:07.936410       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0314 19:42:17.983008    8428 command_runner.go:130] ! I0314 19:41:07.936565       1 controller.go:169] "Starting ephemeral volume controller"
	I0314 19:42:17.983085    8428 command_runner.go:130] ! I0314 19:41:07.936765       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0314 19:42:17.983085    8428 command_runner.go:130] ! I0314 19:41:07.954720       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:42:17.983085    8428 command_runner.go:130] ! I0314 19:41:07.954939       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:42:17.983085    8428 command_runner.go:130] ! I0314 19:41:07.955142       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:42:17.983165    8428 command_runner.go:130] ! I0314 19:41:07.970387       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0314 19:42:17.983165    8428 command_runner.go:130] ! I0314 19:41:07.970474       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0314 19:42:17.983165    8428 command_runner.go:130] ! I0314 19:41:07.970624       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:17.983165    8428 command_runner.go:130] ! I0314 19:41:07.971307       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0314 19:42:17.983240    8428 command_runner.go:130] ! I0314 19:41:07.975049       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0314 19:42:17.983240    8428 command_runner.go:130] ! I0314 19:41:07.973288       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:17.983240    8428 command_runner.go:130] ! I0314 19:41:07.974848       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0314 19:42:17.983310    8428 command_runner.go:130] ! I0314 19:41:07.974977       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0314 19:42:17.983310    8428 command_runner.go:130] ! I0314 19:41:07.977476       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:42:17.983310    8428 command_runner.go:130] ! I0314 19:41:07.974992       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:17.983310    8428 command_runner.go:130] ! I0314 19:41:07.975020       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0314 19:42:17.983310    8428 command_runner.go:130] ! I0314 19:41:07.977827       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:17.983390    8428 command_runner.go:130] ! I0314 19:41:07.975030       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:17.983390    8428 command_runner.go:130] ! I0314 19:41:07.990774       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0314 19:42:17.983390    8428 command_runner.go:130] ! I0314 19:41:07.995647       1 ttl_controller.go:124] "Starting TTL controller"
	I0314 19:42:17.983390    8428 command_runner.go:130] ! I0314 19:41:07.995667       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0314 19:42:17.983390    8428 command_runner.go:130] ! I0314 19:41:08.019000       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0314 19:42:17.983464    8428 command_runner.go:130] ! I0314 19:41:08.019415       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:42:17.983464    8428 command_runner.go:130] ! I0314 19:41:08.019568       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:42:17.983464    8428 command_runner.go:130] ! I0314 19:41:08.019700       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:42:17.983464    8428 command_runner.go:130] ! E0314 19:41:08.029770       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0314 19:42:17.983464    8428 command_runner.go:130] ! I0314 19:41:08.029950       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0314 19:42:17.983540    8428 command_runner.go:130] ! I0314 19:41:08.030066       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0314 19:42:17.983540    8428 command_runner.go:130] ! I0314 19:41:08.030148       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0314 19:42:17.983540    8428 command_runner.go:130] ! I0314 19:41:08.056856       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:42:17.983540    8428 command_runner.go:130] ! I0314 19:41:08.058933       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:42:17.983540    8428 command_runner.go:130] ! I0314 19:41:08.059323       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:42:17.983540    8428 command_runner.go:130] ! I0314 19:41:08.062839       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0314 19:42:17.983613    8428 command_runner.go:130] ! I0314 19:41:08.063208       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0314 19:42:17.983613    8428 command_runner.go:130] ! I0314 19:41:08.063512       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0314 19:42:17.983613    8428 command_runner.go:130] ! I0314 19:41:08.070376       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0314 19:42:17.983613    8428 command_runner.go:130] ! I0314 19:41:08.070635       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0314 19:42:17.983687    8428 command_runner.go:130] ! I0314 19:41:08.070748       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0314 19:42:17.983687    8428 command_runner.go:130] ! I0314 19:41:08.071006       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.071615       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.079849       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.080117       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.081765       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.084328       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.084731       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 19:42:17.983763    8428 command_runner.go:130] ! I0314 19:41:08.085301       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 19:42:17.983836    8428 command_runner.go:130] ! I0314 19:41:08.092529       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0314 19:42:17.983836    8428 command_runner.go:130] ! I0314 19:41:08.092761       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0314 19:42:17.983836    8428 command_runner.go:130] ! I0314 19:41:08.092771       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:17.983836    8428 command_runner.go:130] ! I0314 19:41:08.097268       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0314 19:42:17.983910    8428 command_runner.go:130] ! I0314 19:41:08.097521       1 expand_controller.go:328] "Starting expand controller"
	I0314 19:42:17.983910    8428 command_runner.go:130] ! I0314 19:41:08.097531       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0314 19:42:17.983910    8428 command_runner.go:130] ! I0314 19:41:08.097559       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0314 19:42:17.983910    8428 command_runner.go:130] ! I0314 19:41:08.117374       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0314 19:42:17.983910    8428 command_runner.go:130] ! I0314 19:41:08.117512       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0314 19:42:17.983981    8428 command_runner.go:130] ! I0314 19:41:08.117524       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0314 19:42:17.983981    8428 command_runner.go:130] ! I0314 19:41:08.126388       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:42:17.983981    8428 command_runner.go:130] ! I0314 19:41:08.127645       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:42:17.983981    8428 command_runner.go:130] ! I0314 19:41:08.127702       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:42:17.984056    8428 command_runner.go:130] ! I0314 19:41:08.131336       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:42:17.984056    8428 command_runner.go:130] ! I0314 19:41:08.131505       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:42:17.984056    8428 command_runner.go:130] ! E0314 19:41:08.142589       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 19:42:17.984056    8428 command_runner.go:130] ! I0314 19:41:08.142621       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 19:42:17.984056    8428 command_runner.go:130] ! I0314 19:41:08.150057       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0314 19:42:17.984131    8428 command_runner.go:130] ! I0314 19:41:08.152574       1 gc_controller.go:101] "Starting GC controller"
	I0314 19:42:17.984131    8428 command_runner.go:130] ! I0314 19:41:08.152724       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 19:42:17.984131    8428 command_runner.go:130] ! I0314 19:41:08.302881       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0314 19:42:17.984131    8428 command_runner.go:130] ! I0314 19:41:08.303337       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0314 19:42:17.984131    8428 command_runner.go:130] ! W0314 19:41:08.303671       1 shared_informer.go:593] resyncPeriod 21h24m41.293167603s is smaller than resyncCheckPeriod 22h48m56.659186017s and the informer has already started. Changing it to 22h48m56.659186017s
	I0314 19:42:17.984206    8428 command_runner.go:130] ! I0314 19:41:08.303970       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0314 19:42:17.984206    8428 command_runner.go:130] ! I0314 19:41:08.304292       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0314 19:42:17.984206    8428 command_runner.go:130] ! I0314 19:41:08.304532       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0314 19:42:17.984279    8428 command_runner.go:130] ! I0314 19:41:08.304816       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0314 19:42:17.984279    8428 command_runner.go:130] ! I0314 19:41:08.305073       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0314 19:42:17.984279    8428 command_runner.go:130] ! I0314 19:41:08.305373       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0314 19:42:17.984279    8428 command_runner.go:130] ! I0314 19:41:08.305634       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0314 19:42:17.984354    8428 command_runner.go:130] ! I0314 19:41:08.305976       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0314 19:42:17.984354    8428 command_runner.go:130] ! I0314 19:41:08.306286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0314 19:42:17.984354    8428 command_runner.go:130] ! I0314 19:41:08.306541       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0314 19:42:17.984354    8428 command_runner.go:130] ! I0314 19:41:08.306699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0314 19:42:17.984429    8428 command_runner.go:130] ! I0314 19:41:08.306843       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0314 19:42:17.984429    8428 command_runner.go:130] ! I0314 19:41:08.307119       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0314 19:42:17.984429    8428 command_runner.go:130] ! I0314 19:41:08.307379       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:42:17.984429    8428 command_runner.go:130] ! I0314 19:41:08.307553       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0314 19:42:17.984429    8428 command_runner.go:130] ! I0314 19:41:08.307700       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0314 19:42:17.984504    8428 command_runner.go:130] ! I0314 19:41:08.308022       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0314 19:42:17.984504    8428 command_runner.go:130] ! I0314 19:41:08.308207       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0314 19:42:17.984504    8428 command_runner.go:130] ! I0314 19:41:08.308473       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:42:17.984504    8428 command_runner.go:130] ! I0314 19:41:08.308664       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:42:17.984504    8428 command_runner.go:130] ! I0314 19:41:08.309850       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:17.984580    8428 command_runner.go:130] ! I0314 19:41:08.310060       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:42:17.984580    8428 command_runner.go:130] ! I0314 19:41:08.344084       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0314 19:42:17.984580    8428 command_runner.go:130] ! I0314 19:41:08.344536       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0314 19:42:17.984580    8428 command_runner.go:130] ! I0314 19:41:08.344832       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0314 19:42:17.984580    8428 command_runner.go:130] ! I0314 19:41:08.397742       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:42:17.984654    8428 command_runner.go:130] ! I0314 19:41:08.400742       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:42:17.984654    8428 command_runner.go:130] ! I0314 19:41:08.401126       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:42:17.984654    8428 command_runner.go:130] ! I0314 19:41:08.448054       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:42:17.984654    8428 command_runner.go:130] ! I0314 19:41:08.448538       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:42:17.984654    8428 command_runner.go:130] ! I0314 19:41:08.495738       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0314 19:42:17.984728    8428 command_runner.go:130] ! I0314 19:41:08.496045       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0314 19:42:17.984728    8428 command_runner.go:130] ! I0314 19:41:08.496112       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0314 19:42:17.984728    8428 command_runner.go:130] ! I0314 19:41:08.547967       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:42:17.984728    8428 command_runner.go:130] ! I0314 19:41:08.548352       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:42:17.984728    8428 command_runner.go:130] ! I0314 19:41:08.548556       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:42:17.984728    8428 command_runner.go:130] ! I0314 19:41:08.593742       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0314 19:42:17.984817    8428 command_runner.go:130] ! I0314 19:41:08.593860       1 job_controller.go:226] "Starting job controller"
	I0314 19:42:17.984817    8428 command_runner.go:130] ! I0314 19:41:08.594297       1 shared_informer.go:311] Waiting for caches to sync for job
	I0314 19:42:17.984817    8428 command_runner.go:130] ! I0314 19:41:08.650392       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:42:17.984893    8428 command_runner.go:130] ! I0314 19:41:08.650668       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:42:17.984893    8428 command_runner.go:130] ! I0314 19:41:08.650851       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:42:17.984893    8428 command_runner.go:130] ! I0314 19:41:08.704591       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0314 19:42:17.984893    8428 command_runner.go:130] ! I0314 19:41:08.704627       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0314 19:42:17.984893    8428 command_runner.go:130] ! I0314 19:41:08.704645       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0314 19:42:17.984973    8428 command_runner.go:130] ! I0314 19:41:18.768485       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0314 19:42:17.984973    8428 command_runner.go:130] ! I0314 19:41:18.768824       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0314 19:42:17.984973    8428 command_runner.go:130] ! I0314 19:41:18.769281       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0314 19:42:17.984973    8428 command_runner.go:130] ! I0314 19:41:18.769315       1 shared_informer.go:311] Waiting for caches to sync for node
	I0314 19:42:17.984973    8428 command_runner.go:130] ! I0314 19:41:18.779639       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:17.985046    8428 command_runner.go:130] ! I0314 19:41:18.796167       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 19:42:17.985046    8428 command_runner.go:130] ! I0314 19:41:18.796514       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:17.985046    8428 command_runner.go:130] ! I0314 19:41:18.796299       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000\" does not exist"
	I0314 19:42:17.985046    8428 command_runner.go:130] ! I0314 19:41:18.799471       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:42:17.985120    8428 command_runner.go:130] ! I0314 19:41:18.799722       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 19:42:17.985120    8428 command_runner.go:130] ! I0314 19:41:18.799937       1 shared_informer.go:318] Caches are synced for TTL
	I0314 19:42:17.985120    8428 command_runner.go:130] ! I0314 19:41:18.800165       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:17.985120    8428 command_runner.go:130] ! I0314 19:41:18.802329       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:17.985203    8428 command_runner.go:130] ! I0314 19:41:18.802379       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:17.985203    8428 command_runner.go:130] ! I0314 19:41:18.806338       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 19:42:17.985203    8428 command_runner.go:130] ! I0314 19:41:18.836188       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 19:42:17.985203    8428 command_runner.go:130] ! I0314 19:41:18.842003       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 19:42:17.985275    8428 command_runner.go:130] ! I0314 19:41:18.842516       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 19:42:17.985275    8428 command_runner.go:130] ! I0314 19:41:18.845380       1 shared_informer.go:318] Caches are synced for service account
	I0314 19:42:17.985275    8428 command_runner.go:130] ! I0314 19:41:18.848744       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 19:42:17.985275    8428 command_runner.go:130] ! I0314 19:41:18.849154       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 19:42:17.985275    8428 command_runner.go:130] ! I0314 19:41:18.849988       1 shared_informer.go:318] Caches are synced for namespace
	I0314 19:42:17.985275    8428 command_runner.go:130] ! I0314 19:41:18.850447       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:17.985353    8428 command_runner.go:130] ! I0314 19:41:18.851139       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 19:42:17.985353    8428 command_runner.go:130] ! I0314 19:41:18.852942       1 shared_informer.go:318] Caches are synced for GC
	I0314 19:42:17.985353    8428 command_runner.go:130] ! I0314 19:41:18.860631       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 19:42:17.985353    8428 command_runner.go:130] ! I0314 19:41:18.862001       1 shared_informer.go:318] Caches are synced for cronjob
	I0314 19:42:17.985353    8428 command_runner.go:130] ! I0314 19:41:18.862045       1 shared_informer.go:318] Caches are synced for PVC protection
	I0314 19:42:17.985429    8428 command_runner.go:130] ! I0314 19:41:18.864453       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0314 19:42:17.985429    8428 command_runner.go:130] ! I0314 19:41:18.865205       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 19:42:17.985429    8428 command_runner.go:130] ! I0314 19:41:18.870312       1 shared_informer.go:318] Caches are synced for node
	I0314 19:42:17.985429    8428 command_runner.go:130] ! I0314 19:41:18.871490       1 range_allocator.go:174] "Sending events to api server"
	I0314 19:42:17.985429    8428 command_runner.go:130] ! I0314 19:41:18.871652       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 19:42:17.985429    8428 command_runner.go:130] ! I0314 19:41:18.871843       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 19:42:17.985508    8428 command_runner.go:130] ! I0314 19:41:18.871901       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 19:42:17.985508    8428 command_runner.go:130] ! I0314 19:41:18.871655       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0314 19:42:17.985508    8428 command_runner.go:130] ! I0314 19:41:18.871600       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 19:42:17.985508    8428 command_runner.go:130] ! I0314 19:41:18.877449       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0314 19:42:17.985508    8428 command_runner.go:130] ! I0314 19:41:18.878919       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.880521       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.886337       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.895206       1 shared_informer.go:318] Caches are synced for job
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.898522       1 shared_informer.go:318] Caches are synced for expand
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.902360       1 shared_informer.go:318] Caches are synced for deployment
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.905493       1 shared_informer.go:318] Caches are synced for HPA
	I0314 19:42:17.985581    8428 command_runner.go:130] ! I0314 19:41:18.906213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.805878ms"
	I0314 19:42:17.985656    8428 command_runner.go:130] ! I0314 19:41:18.908178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.802µs"
	I0314 19:42:17.985656    8428 command_runner.go:130] ! I0314 19:41:18.908549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.720551ms"
	I0314 19:42:17.985656    8428 command_runner.go:130] ! I0314 19:41:18.911784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.705µs"
	I0314 19:42:17.985656    8428 command_runner.go:130] ! I0314 19:41:18.919410       1 shared_informer.go:318] Caches are synced for crt configmap
	I0314 19:42:17.985656    8428 command_runner.go:130] ! I0314 19:41:18.923587       1 shared_informer.go:318] Caches are synced for disruption
	I0314 19:42:17.985732    8428 command_runner.go:130] ! I0314 19:41:18.974303       1 shared_informer.go:318] Caches are synced for taint
	I0314 19:42:17.985732    8428 command_runner.go:130] ! I0314 19:41:18.974653       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 19:42:17.985732    8428 command_runner.go:130] ! I0314 19:41:18.975178       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 19:42:17.985732    8428 command_runner.go:130] ! I0314 19:41:18.975416       1 taint_manager.go:210] "Sending events to api server"
	I0314 19:42:17.985806    8428 command_runner.go:130] ! I0314 19:41:18.977051       1 event.go:307] "Event occurred" object="multinode-442000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000 event: Registered Node multinode-442000 in Controller"
	I0314 19:42:17.985806    8428 command_runner.go:130] ! I0314 19:41:18.977995       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:42:17.985806    8428 command_runner.go:130] ! I0314 19:41:18.978165       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:17.985806    8428 command_runner.go:130] ! I0314 19:41:18.980168       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:17.985883    8428 command_runner.go:130] ! I0314 19:41:18.982162       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 19:42:17.985883    8428 command_runner.go:130] ! I0314 19:41:19.001384       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000"
	I0314 19:42:17.985883    8428 command_runner.go:130] ! I0314 19:41:19.002299       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:42:17.985883    8428 command_runner.go:130] ! I0314 19:41:19.002838       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:42:17.985956    8428 command_runner.go:130] ! I0314 19:41:19.003844       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0314 19:42:17.985956    8428 command_runner.go:130] ! I0314 19:41:19.010468       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:17.985956    8428 command_runner.go:130] ! I0314 19:41:19.393074       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:17.985956    8428 command_runner.go:130] ! I0314 19:41:19.393161       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 19:42:17.986031    8428 command_runner.go:130] ! I0314 19:41:19.450734       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:17.986031    8428 command_runner.go:130] ! I0314 19:41:41.542550       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:17.986031    8428 command_runner.go:130] ! I0314 19:41:44.029818       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0314 19:42:17.986031    8428 command_runner.go:130] ! I0314 19:41:44.029853       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-d22jc" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-d22jc"
	I0314 19:42:17.986111    8428 command_runner.go:130] ! I0314 19:41:44.029866       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-7446n" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-7446n"
	I0314 19:42:17.986169    8428 command_runner.go:130] ! I0314 19:41:59.058949       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m02 status is now: NodeNotReady"
	I0314 19:42:17.986205    8428 command_runner.go:130] ! I0314 19:41:59.074940       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8drpb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:17.986205    8428 command_runner.go:130] ! I0314 19:41:59.085508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.938337ms"
	I0314 19:42:17.986205    8428 command_runner.go:130] ! I0314 19:41:59.086845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.804µs"
	I0314 19:42:17.986205    8428 command_runner.go:130] ! I0314 19:41:59.099029       1 event.go:307] "Event occurred" object="kube-system/kindnet-c7m4p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:17.986205    8428 command_runner.go:130] ! I0314 19:41:59.122329       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-72dzs" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:17.986282    8428 command_runner.go:130] ! I0314 19:42:12.281109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.332951ms"
	I0314 19:42:17.986282    8428 command_runner.go:130] ! I0314 19:42:12.281325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="115.209µs"
	I0314 19:42:17.986313    8428 command_runner.go:130] ! I0314 19:42:12.305037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.006µs"
	I0314 19:42:17.986341    8428 command_runner.go:130] ! I0314 19:42:12.366507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.074928ms"
	I0314 19:42:17.986341    8428 command_runner.go:130] ! I0314 19:42:12.368560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.408µs"
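	The kube-controller-manager entries above follow the standard informer pattern: each controller first logs "Waiting for caches to sync" and only begins reconciling once the matching "Caches are synced" line appears, after which the node-lifecycle controller marks multinode-442000-m02 NotReady and the taint manager cancels the pending pod evictions. The Docker-side logs that follow are collected over SSH; a minimal sketch of reproducing that collection by hand, assuming the multinode-442000 profile named in the log and the same journalctl flags shown in the ssh_runner line below:
	
	  out/minikube-windows-amd64.exe -p multinode-442000 ssh -- sudo journalctl -u docker -u cri-docker -n 400
	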
	I0314 19:42:17.998710    8428 logs.go:123] Gathering logs for Docker ...
	I0314 19:42:17.998710    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 19:42:18.030533    8428 command_runner.go:130] > Mar 14 19:39:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:18.030533    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:18.030533    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:18.030639    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:18.030639    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:18.030676    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:18.030712    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.030712    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0314 19:42:18.030752    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.030752    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:18.030787    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:18.030787    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:18.030826    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:18.030826    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:18.030826    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:18.030869    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.030869    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0314 19:42:18.030911    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.030911    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:18.030947    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:18.030947    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
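	The repeated cri-docker failures above are consistent with cri-dockerd coming up before dockerd has created /var/run/docker.sock; after three rapid exits systemd trips its start rate limit ("Start request repeated too quickly") and stops retrying until the node reboots. A hedged sketch for inspecting and clearing that state from inside the node (unit name taken from the log; the actual burst/interval limits depend on the unit file):
	
	  out/minikube-windows-amd64.exe -p multinode-442000 ssh
	  systemctl show cri-docker.service -p Restart -p StartLimitBurst -p StartLimitIntervalUSec
	  sudo systemctl reset-failed cri-docker.service
	  sudo systemctl start cri-docker.service
	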
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:26 multinode-442000 systemd[1]: Starting Docker Application Container Engine...
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.010258466Z" level=info msg="Starting up"
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.011413188Z" level=info msg="containerd not running, starting managed containerd"
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.012927209Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=656
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.042687292Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069138554Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069242083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069344111Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069362416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070081016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070164740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070380400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070511536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070532642Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070544145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070983067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.071556427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074554061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.031008    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074645687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.031536    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074800830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.031536    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074883153Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0314 19:42:18.031576    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075687977Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0314 19:42:18.031619    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075800308Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0314 19:42:18.031657    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075818813Z" level=info msg="metadata content store policy set" policy=shared
	I0314 19:42:18.031657    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081334348Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0314 19:42:18.031691    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081440978Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0314 19:42:18.031691    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081463484Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0314 19:42:18.031730    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081526902Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0314 19:42:18.031765    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081545007Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0314 19:42:18.031765    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081621128Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0314 19:42:18.031804    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082036144Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0314 19:42:18.031804    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082193387Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0314 19:42:18.031846    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082276711Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0314 19:42:18.031846    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082349431Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0314 19:42:18.031887    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082368036Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.031928    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082385141Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.031928    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082401545Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.031969    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082417450Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.032010    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082433154Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.032010    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082457161Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.032052    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082515377Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.032052    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082533482Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.032087    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082554788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032126    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082572093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032126    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082586997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032166    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082601801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032205    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082616305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032239    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082631109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032271    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082643913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032271    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082659317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082673721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082690226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082704230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082717333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082730637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082747942Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082771048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082785952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082799956Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082936994Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082973004Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082986808Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082998612Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083067631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083095839Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083107842Z" level=info msg="NRI interface is disabled by configuration."
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083364013Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083531860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083575672Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083609482Z" level=info msg="containerd successfully booted in 0.043398s"
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.063674621Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.220876850Z" level=info msg="Loading containers: start."
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.643208421Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0314 19:42:18.032297    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.726589336Z" level=info msg="Loading containers: done."
	I0314 19:42:18.032821    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.750141296Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0314 19:42:18.032862    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.750832983Z" level=info msg="Daemon has completed initialization"
	I0314 19:42:18.032862    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 systemd[1]: Started Docker Application Container Engine.
	I0314 19:42:18.032862    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.799522730Z" level=info msg="API listen on [::]:2376"
	I0314 19:42:18.032904    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.799691776Z" level=info msg="API listen on /var/run/docker.sock"
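	At this point dockerd 25.0.4 is serving on both /var/run/docker.sock and TCP 2376. The stop/start cycle that follows is consistent with minikube restarting the engine after writing its configuration rather than with a crash (note the clean "Processing signal 'terminated'" and "Daemon shutdown complete" lines). A quick health probe, assuming the same profile:
	
	  out/minikube-windows-amd64.exe -p multinode-442000 ssh -- sudo docker version --format '{{.Server.Version}}'
	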
	I0314 19:42:18.032904    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 systemd[1]: Stopping Docker Application Container Engine...
	I0314 19:42:18.032944    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.824796168Z" level=info msg="Processing signal 'terminated'"
	I0314 19:42:18.032978    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.825961557Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0314 19:42:18.032978    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826585605Z" level=info msg="Daemon shutdown complete"
	I0314 19:42:18.033017    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826653911Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0314 19:42:18.033051    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826812323Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0314 19:42:18.033051    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: docker.service: Deactivated successfully.
	I0314 19:42:18.033090    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: Stopped Docker Application Container Engine.
	I0314 19:42:18.033090    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: Starting Docker Application Container Engine...
	I0314 19:42:18.033124    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.899936864Z" level=info msg="Starting up"
	I0314 19:42:18.033124    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.900739426Z" level=info msg="containerd not running, starting managed containerd"
	I0314 19:42:18.033163    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.901763504Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1049
	I0314 19:42:18.033163    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.930795337Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0314 19:42:18.033213    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.957961927Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0314 19:42:18.033213    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958063735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0314 19:42:18.033253    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958107338Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0314 19:42:18.033286    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958123339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033325    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958150841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.033359    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958163842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033398    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958360458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.033439    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958444864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033439    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958463766Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0314 19:42:18.033478    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958475466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033478    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958502569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033518    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958670881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033557    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961627209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.033592    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961715316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:18.033631    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961871928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:18.033672    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961949634Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0314 19:42:18.033712    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961985336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0314 19:42:18.033747    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962005238Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0314 19:42:18.033787    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962017139Z" level=info msg="metadata content store policy set" policy=shared
	I0314 19:42:18.033787    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962188852Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0314 19:42:18.033828    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962280259Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0314 19:42:18.033828    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962311462Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0314 19:42:18.033869    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962328263Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0314 19:42:18.033869    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962344564Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0314 19:42:18.033932    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962393368Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0314 19:42:18.033932    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962810900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0314 19:42:18.033932    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962939310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0314 19:42:18.034006    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963018216Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0314 19:42:18.034006    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963036317Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0314 19:42:18.034006    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963060419Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034063    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963076820Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034063    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963091221Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034063    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963106323Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034124    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963121324Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034124    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963135425Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034181    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963148726Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034181    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963162027Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0314 19:42:18.034181    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963184029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034265    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963205330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034265    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963220631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034295    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963270235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034295    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963286336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034339    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963300438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034339    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963313039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034339    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963326640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034405    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963341141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034405    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963357642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034405    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963369743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034477    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963382444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034477    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963395545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034477    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963411646Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0314 19:42:18.034541    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963433148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034541    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963449149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034541    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963461550Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0314 19:42:18.034612    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963512954Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0314 19:42:18.034612    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963529855Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0314 19:42:18.034612    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963593860Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0314 19:42:18.034667    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963606261Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0314 19:42:18.034667    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963665466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0314 19:42:18.034727    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963679767Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0314 19:42:18.034727    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963695368Z" level=info msg="NRI interface is disabled by configuration."
	I0314 19:42:18.034727    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.964176205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0314 19:42:18.034785    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.964503330Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0314 19:42:18.034845    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.965392899Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0314 19:42:18.034845    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.966787506Z" level=info msg="containerd successfully booted in 0.037267s"
	I0314 19:42:18.034845    8428 command_runner.go:130] > Mar 14 19:40:54 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:54.945087153Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0314 19:42:18.034902    8428 command_runner.go:130] > Mar 14 19:40:54 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:54.972020025Z" level=info msg="Loading containers: start."
	I0314 19:42:18.034902    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.259462934Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0314 19:42:18.034902    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.336883289Z" level=info msg="Loading containers: done."
	I0314 19:42:18.034964    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.370669888Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0314 19:42:18.034964    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.370874904Z" level=info msg="Daemon has completed initialization"
	I0314 19:42:18.034964    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.415311921Z" level=info msg="API listen on /var/run/docker.sock"
	I0314 19:42:18.034964    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.415467233Z" level=info msg="API listen on [::]:2376"
	I0314 19:42:18.035022    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 systemd[1]: Started Docker Application Container Engine.
	I0314 19:42:18.035022    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:18.035022    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:18.035073    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:18.035073    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0314 19:42:18.035073    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Loaded network plugin cni"
	I0314 19:42:18.035073    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0314 19:42:18.035263    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker Info: &{ID:04f4855f-417a-422c-b5bb-3cf8a43fb438 Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2024-03-14T19:40:56.401787998Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0004c0150 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-442000 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0314 19:42:18.035263    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0314 19:42:18.035317    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0314 19:42:18.035317    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0314 19:42:18.035361    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Start cri-dockerd grpc backend"
	I0314 19:42:18.035361    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0314 19:42:18.035420    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-7446n_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773\""
	I0314 19:42:18.035420    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-d22jc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0\""
	I0314 19:42:18.035481    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294795352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.035481    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294882858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.035481    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294903860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035547    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.295303891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035547    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.380666857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.035608    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.380946878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.035608    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.381075288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035664    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.381588628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035664    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418754186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.035664    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418872295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.035735    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418919499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035735    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.419130315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035797    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35dd339c8a08d84d0d1a4d2c062b04d44baff78d20c6ed33ce967d50c18eaa3c/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.035797    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.449937485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.035797    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450067495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.035797    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450100297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035882    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450295012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.035882    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67475bf80ddd91df7549842450a8d92c27cd16f814cd4e4c750a7cad7d82fc9f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.035938    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a27fa2188ee4cf0c44cde0f8cae03a83655bc574c856082192e3261801efcc72/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.035938    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c70744e60ac50b50085376d0c124ff15cc884b8a836b0085ef71a65ddb06bcfd/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.035938    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782527266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036027    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782834890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036056    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782945299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036056    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.783324628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036100    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950307171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036100    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950638097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036100    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950847113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036168    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.951959699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036168    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.033329657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036168    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.033826996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036238    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.034090516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036238    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.034801671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036293    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038389546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036293    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038570160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036293    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038686569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036355    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038972291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036355    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:05Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0314 19:42:18.036421    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056067890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036421    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056148096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036421    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056166397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036491    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056406816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036491    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.109761119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036549    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110023440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036549    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110099145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036596    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110475674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036596    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.116978275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036632    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117046280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036632    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117060481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036675    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117158888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036675    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a723f141543f2007cc07e048ef5836fca4ae70749b7266630f6c890bb233c09a/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.036740    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f513a7aff67200987eb0f28647720ea4cb9bbdb684fc85d1b08c0dd54563517d/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.036740    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432676357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036788    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432829669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036788    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432849370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036842    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.433004382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036842    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.579105320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.036904    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580432922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.036904    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580451623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036904    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580554931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.036967    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9176b55446637c4407c9a64ce7d85fce2b395bcc0a22061f5f7ff304ff2d47f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.036967    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.897653021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.037017    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.897936143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.037017    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.898062553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037072    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.898459584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037072    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1043]: time="2024-03-14T19:41:37.705977514Z" level=info msg="ignoring event" container=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0314 19:42:18.037120    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706482647Z" level=info msg="shim disconnected" id=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 namespace=moby
	I0314 19:42:18.037120    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706677460Z" level=warning msg="cleaning up after shim disconnected" id=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 namespace=moby
	I0314 19:42:18.037175    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706692261Z" level=info msg="cleaning up dead shim" namespace=moby
	I0314 19:42:18.037175    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663136392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.037225    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663371709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.037262    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663411212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037262    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663537821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037316    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837487028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.037316    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837604337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.037371    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837625738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037371    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837719345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037419    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.848167835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.037419    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849098605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.037474    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849287919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037474    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849656747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.575693713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.575950032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.576019637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577004211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577168224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577288033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577583255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.576656985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:13 multinode-442000 dockerd[1043]: 2024/03/14 19:42:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.037531    8428 command_runner.go:130] > Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.038075    8428 command_runner.go:130] > Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.038075    8428 command_runner.go:130] > Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.038075    8428 command_runner.go:130] > Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.038148    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.038148    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.038203    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:18.067678    8428 logs.go:123] Gathering logs for kube-apiserver [a598d24960de] ...
	I0314 19:42:18.067678    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a598d24960de"
	I0314 19:42:18.104507    8428 command_runner.go:130] ! I0314 19:41:02.580148       1 options.go:220] external host was not specified, using 172.17.93.236
	I0314 19:42:18.104607    8428 command_runner.go:130] ! I0314 19:41:02.584195       1 server.go:148] Version: v1.28.4
	I0314 19:42:18.104607    8428 command_runner.go:130] ! I0314 19:41:02.584361       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.104607    8428 command_runner.go:130] ! I0314 19:41:03.945945       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0314 19:42:18.104762    8428 command_runner.go:130] ! I0314 19:41:03.963375       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0314 19:42:18.104818    8428 command_runner.go:130] ! I0314 19:41:03.963415       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0314 19:42:18.104913    8428 command_runner.go:130] ! I0314 19:41:03.963973       1 instance.go:298] Using reconciler: lease
	I0314 19:42:18.104962    8428 command_runner.go:130] ! I0314 19:41:04.031000       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0314 19:42:18.104998    8428 command_runner.go:130] ! W0314 19:41:04.031118       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.104998    8428 command_runner.go:130] ! I0314 19:41:04.342643       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0314 19:42:18.104998    8428 command_runner.go:130] ! I0314 19:41:04.343120       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0314 19:42:18.105087    8428 command_runner.go:130] ! I0314 19:41:04.862959       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0314 19:42:18.105087    8428 command_runner.go:130] ! I0314 19:41:04.875745       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0314 19:42:18.105087    8428 command_runner.go:130] ! W0314 19:41:04.875858       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105186    8428 command_runner.go:130] ! W0314 19:41:04.875867       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.105186    8428 command_runner.go:130] ! I0314 19:41:04.876422       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0314 19:42:18.105186    8428 command_runner.go:130] ! W0314 19:41:04.876506       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105285    8428 command_runner.go:130] ! I0314 19:41:04.877676       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0314 19:42:18.105285    8428 command_runner.go:130] ! I0314 19:41:04.878707       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0314 19:42:18.105285    8428 command_runner.go:130] ! W0314 19:41:04.878804       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0314 19:42:18.105379    8428 command_runner.go:130] ! W0314 19:41:04.878812       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0314 19:42:18.105379    8428 command_runner.go:130] ! I0314 19:41:04.881331       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0314 19:42:18.105379    8428 command_runner.go:130] ! W0314 19:41:04.881418       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0314 19:42:18.105379    8428 command_runner.go:130] ! I0314 19:41:04.882613       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0314 19:42:18.105479    8428 command_runner.go:130] ! W0314 19:41:04.882706       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105479    8428 command_runner.go:130] ! W0314 19:41:04.882714       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.105479    8428 command_runner.go:130] ! I0314 19:41:04.883473       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0314 19:42:18.105575    8428 command_runner.go:130] ! W0314 19:41:04.883562       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105575    8428 command_runner.go:130] ! W0314 19:41:04.883619       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105575    8428 command_runner.go:130] ! I0314 19:41:04.884340       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0314 19:42:18.105575    8428 command_runner.go:130] ! I0314 19:41:04.886289       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0314 19:42:18.105667    8428 command_runner.go:130] ! W0314 19:41:04.886373       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105752    8428 command_runner.go:130] ! W0314 19:41:04.886382       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.105752    8428 command_runner.go:130] ! I0314 19:41:04.886877       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0314 19:42:18.105752    8428 command_runner.go:130] ! W0314 19:41:04.886971       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105752    8428 command_runner.go:130] ! W0314 19:41:04.886979       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.105846    8428 command_runner.go:130] ! I0314 19:41:04.888213       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0314 19:42:18.105846    8428 command_runner.go:130] ! W0314 19:41:04.888261       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0314 19:42:18.105949    8428 command_runner.go:130] ! I0314 19:41:04.903461       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0314 19:42:18.105949    8428 command_runner.go:130] ! W0314 19:41:04.903509       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.105949    8428 command_runner.go:130] ! W0314 19:41:04.903517       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.105949    8428 command_runner.go:130] ! I0314 19:41:04.906409       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0314 19:42:18.106050    8428 command_runner.go:130] ! W0314 19:41:04.906458       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.106050    8428 command_runner.go:130] ! W0314 19:41:04.906466       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.106050    8428 command_runner.go:130] ! I0314 19:41:04.915366       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0314 19:42:18.106163    8428 command_runner.go:130] ! W0314 19:41:04.915463       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.106255    8428 command_runner.go:130] ! W0314 19:41:04.915472       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.106313    8428 command_runner.go:130] ! I0314 19:41:04.916839       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0314 19:42:18.106313    8428 command_runner.go:130] ! I0314 19:41:04.918318       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0314 19:42:18.106313    8428 command_runner.go:130] ! W0314 19:41:04.918410       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.106403    8428 command_runner.go:130] ! W0314 19:41:04.918418       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.106403    8428 command_runner.go:130] ! I0314 19:41:04.922469       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0314 19:42:18.106403    8428 command_runner.go:130] ! W0314 19:41:04.922563       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0314 19:42:18.106403    8428 command_runner.go:130] ! W0314 19:41:04.922576       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0314 19:42:18.106504    8428 command_runner.go:130] ! I0314 19:41:04.923589       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0314 19:42:18.106504    8428 command_runner.go:130] ! W0314 19:41:04.923671       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.106604    8428 command_runner.go:130] ! W0314 19:41:04.923678       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:18.106604    8428 command_runner.go:130] ! I0314 19:41:04.924323       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0314 19:42:18.106604    8428 command_runner.go:130] ! W0314 19:41:04.924409       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.106701    8428 command_runner.go:130] ! I0314 19:41:04.946149       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0314 19:42:18.106701    8428 command_runner.go:130] ! W0314 19:41:04.946188       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:18.106701    8428 command_runner.go:130] ! I0314 19:41:05.649195       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:18.106701    8428 command_runner.go:130] ! I0314 19:41:05.649351       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:18.106801    8428 command_runner.go:130] ! I0314 19:41:05.650113       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0314 19:42:18.106801    8428 command_runner.go:130] ! I0314 19:41:05.651281       1 secure_serving.go:213] Serving securely on [::]:8443
	I0314 19:42:18.106801    8428 command_runner.go:130] ! I0314 19:41:05.651311       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:18.106801    8428 command_runner.go:130] ! I0314 19:41:05.651726       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0314 19:42:18.106906    8428 command_runner.go:130] ! I0314 19:41:05.651907       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0314 19:42:18.106906    8428 command_runner.go:130] ! I0314 19:41:05.654468       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0314 19:42:18.106906    8428 command_runner.go:130] ! I0314 19:41:05.654814       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 19:42:18.107009    8428 command_runner.go:130] ! I0314 19:41:05.655201       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 19:42:18.107009    8428 command_runner.go:130] ! I0314 19:41:05.656049       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0314 19:42:18.107009    8428 command_runner.go:130] ! I0314 19:41:05.656308       1 available_controller.go:423] Starting AvailableConditionController
	I0314 19:42:18.107117    8428 command_runner.go:130] ! I0314 19:41:05.656404       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0314 19:42:18.107117    8428 command_runner.go:130] ! I0314 19:41:05.651597       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0314 19:42:18.107117    8428 command_runner.go:130] ! I0314 19:41:05.656599       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0314 19:42:18.107117    8428 command_runner.go:130] ! I0314 19:41:05.658623       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0314 19:42:18.107223    8428 command_runner.go:130] ! I0314 19:41:05.658785       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0314 19:42:18.107223    8428 command_runner.go:130] ! I0314 19:41:05.659483       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0314 19:42:18.107223    8428 command_runner.go:130] ! I0314 19:41:05.661076       1 aggregator.go:164] waiting for initial CRD sync...
	I0314 19:42:18.107223    8428 command_runner.go:130] ! I0314 19:41:05.662487       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0314 19:42:18.107330    8428 command_runner.go:130] ! I0314 19:41:05.662789       1 controller.go:78] Starting OpenAPI AggregationController
	I0314 19:42:18.107330    8428 command_runner.go:130] ! I0314 19:41:05.727194       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:18.107330    8428 command_runner.go:130] ! I0314 19:41:05.728512       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:18.107424    8428 command_runner.go:130] ! I0314 19:41:05.729067       1 controller.go:116] Starting legacy_token_tracking_controller
	I0314 19:42:18.107451    8428 command_runner.go:130] ! I0314 19:41:05.729317       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0314 19:42:18.107451    8428 command_runner.go:130] ! I0314 19:41:05.729432       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0314 19:42:18.107451    8428 command_runner.go:130] ! I0314 19:41:05.729507       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0314 19:42:18.107451    8428 command_runner.go:130] ! I0314 19:41:05.729606       1 controller.go:134] Starting OpenAPI controller
	I0314 19:42:18.107451    8428 command_runner.go:130] ! I0314 19:41:05.729710       1 controller.go:85] Starting OpenAPI V3 controller
	I0314 19:42:18.107451    8428 command_runner.go:130] ! I0314 19:41:05.729812       1 naming_controller.go:291] Starting NamingConditionController
	I0314 19:42:18.107633    8428 command_runner.go:130] ! I0314 19:41:05.729911       1 establishing_controller.go:76] Starting EstablishingController
	I0314 19:42:18.107633    8428 command_runner.go:130] ! I0314 19:41:05.730411       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0314 19:42:18.107633    8428 command_runner.go:130] ! I0314 19:41:05.730521       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0314 19:42:18.107633    8428 command_runner.go:130] ! I0314 19:41:05.730616       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0314 19:42:18.107741    8428 command_runner.go:130] ! I0314 19:41:05.799477       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 19:42:18.107741    8428 command_runner.go:130] ! I0314 19:41:05.813580       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 19:42:18.107741    8428 command_runner.go:130] ! I0314 19:41:05.830168       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 19:42:18.107741    8428 command_runner.go:130] ! I0314 19:41:05.830217       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 19:42:18.107847    8428 command_runner.go:130] ! I0314 19:41:05.830281       1 aggregator.go:166] initial CRD sync complete...
	I0314 19:42:18.107847    8428 command_runner.go:130] ! I0314 19:41:05.830289       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 19:42:18.107847    8428 command_runner.go:130] ! I0314 19:41:05.830295       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 19:42:18.107847    8428 command_runner.go:130] ! I0314 19:41:05.830301       1 cache.go:39] Caches are synced for autoregister controller
	I0314 19:42:18.107944    8428 command_runner.go:130] ! I0314 19:41:05.846941       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 19:42:18.107999    8428 command_runner.go:130] ! I0314 19:41:05.857057       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 19:42:18.107999    8428 command_runner.go:130] ! I0314 19:41:05.858966       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 19:42:18.108071    8428 command_runner.go:130] ! I0314 19:41:05.865554       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 19:42:18.108071    8428 command_runner.go:130] ! I0314 19:41:05.865721       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 19:42:18.108071    8428 command_runner.go:130] ! I0314 19:41:06.667315       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 19:42:18.108071    8428 command_runner.go:130] ! W0314 19:41:07.118314       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.17.93.236]
	I0314 19:42:18.108164    8428 command_runner.go:130] ! I0314 19:41:07.120612       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 19:42:18.108192    8428 command_runner.go:130] ! I0314 19:41:07.135973       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 19:42:18.108192    8428 command_runner.go:130] ! I0314 19:41:09.049225       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 19:42:18.108192    8428 command_runner.go:130] ! I0314 19:41:09.264220       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 19:42:18.108192    8428 command_runner.go:130] ! I0314 19:41:09.277110       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 19:42:18.108192    8428 command_runner.go:130] ! I0314 19:41:09.393446       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 19:42:18.108192    8428 command_runner.go:130] ! I0314 19:41:09.422214       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 19:42:18.114857    8428 logs.go:123] Gathering logs for kube-controller-manager [16b80f73683d] ...
	I0314 19:42:18.114857    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b80f73683d"
	I0314 19:42:18.142896    8428 command_runner.go:130] ! I0314 19:18:57.791996       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:18:58.802083       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:18:58.802123       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:18:58.803952       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:18:58.804068       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:18:58.807259       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:18:58.807321       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.211766       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.241058       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.241394       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.241421       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.277645       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.277842       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:42:18.143269    8428 command_runner.go:130] ! I0314 19:19:03.277987       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.278099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.278176       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.278283       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.278389       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.278566       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! W0314 19:19:03.278710       1 shared_informer.go:593] resyncPeriod 13h23m0.648968128s is smaller than resyncCheckPeriod 15h46m21.421594093s and the informer has already started. Changing it to 15h46m21.421594093s
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.278915       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.279052       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.279196       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.279291       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0314 19:42:18.143804    8428 command_runner.go:130] ! I0314 19:19:03.279313       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0314 19:42:18.144131    8428 command_runner.go:130] ! I0314 19:19:03.279560       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0314 19:42:18.144131    8428 command_runner.go:130] ! I0314 19:19:03.279688       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0314 19:42:18.144131    8428 command_runner.go:130] ! I0314 19:19:03.279834       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0314 19:42:18.144131    8428 command_runner.go:130] ! I0314 19:19:03.279857       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0314 19:42:18.144131    8428 command_runner.go:130] ! I0314 19:19:03.279927       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0314 19:42:18.144131    8428 command_runner.go:130] ! I0314 19:19:03.280011       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.280106       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.280148       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.280224       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.280306       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.280392       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.297527       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.297675       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.297706       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.310691       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0314 19:42:18.144261    8428 command_runner.go:130] ! I0314 19:19:03.310864       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.311121       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.311163       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.311170       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.312491       1 shared_informer.go:318] Caches are synced for tokens
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.324271       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.324640       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0314 19:42:18.144470    8428 command_runner.go:130] ! I0314 19:19:03.324856       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.341489       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.341829       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.359979       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.360131       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.373006       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.373343       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:42:18.144601    8428 command_runner.go:130] ! I0314 19:19:03.373606       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:42:18.144720    8428 command_runner.go:130] ! I0314 19:19:03.385026       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0314 19:42:18.144720    8428 command_runner.go:130] ! I0314 19:19:03.385081       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0314 19:42:18.144720    8428 command_runner.go:130] ! I0314 19:19:03.385807       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0314 19:42:18.144720    8428 command_runner.go:130] ! I0314 19:19:03.399556       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0314 19:42:18.144720    8428 command_runner.go:130] ! I0314 19:19:03.399796       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0314 19:42:18.144720    8428 command_runner.go:130] ! I0314 19:19:03.399936       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.400078       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.400349       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.400489       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.521977       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.522076       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.522086       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0314 19:42:18.144844    8428 command_runner.go:130] ! I0314 19:19:03.567446       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0314 19:42:18.144975    8428 command_runner.go:130] ! I0314 19:19:03.567574       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0314 19:42:18.144975    8428 command_runner.go:130] ! I0314 19:19:03.567615       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:18.144975    8428 command_runner.go:130] ! I0314 19:19:03.568792       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0314 19:42:18.144975    8428 command_runner.go:130] ! I0314 19:19:03.568891       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.569119       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.570147       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.570261       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.570356       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.571403       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.571529       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:18.145108    8428 command_runner.go:130] ! I0314 19:19:03.571434       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0314 19:42:18.145243    8428 command_runner.go:130] ! I0314 19:19:03.572095       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:42:18.145243    8428 command_runner.go:130] ! I0314 19:19:03.723142       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0314 19:42:18.145243    8428 command_runner.go:130] ! I0314 19:19:03.723289       1 ttl_controller.go:124] "Starting TTL controller"
	I0314 19:42:18.145243    8428 command_runner.go:130] ! I0314 19:19:03.723300       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0314 19:42:18.145243    8428 command_runner.go:130] ! I0314 19:19:13.784656       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0314 19:42:18.145243    8428 command_runner.go:130] ! I0314 19:19:13.784710       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.784891       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.784975       1 shared_informer.go:311] Waiting for caches to sync for node
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.813537       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.814099       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.814528       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.831516       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.831928       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.832023       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.832052       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.876141       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.876437       1 horizontal.go:200] "Starting HPA controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.876448       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.892498       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.892891       1 disruption.go:433] "Sending events to api server."
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.893092       1 disruption.go:444] "Starting disruption controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.893185       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.895299       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.895861       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.896105       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.908480       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.908861       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.908873       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.929369       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.929803       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0314 19:42:18.145326    8428 command_runner.go:130] ! I0314 19:19:13.930050       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0314 19:42:18.145863    8428 command_runner.go:130] ! I0314 19:19:13.974683       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:42:18.145863    8428 command_runner.go:130] ! I0314 19:19:13.974899       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:42:18.145863    8428 command_runner.go:130] ! I0314 19:19:13.975108       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:42:18.145863    8428 command_runner.go:130] ! E0314 19:19:14.134866       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0314 19:42:18.145863    8428 command_runner.go:130] ! I0314 19:19:14.135266       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0314 19:42:18.145964    8428 command_runner.go:130] ! E0314 19:19:14.170400       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 19:42:18.145964    8428 command_runner.go:130] ! I0314 19:19:14.170426       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 19:42:18.145964    8428 command_runner.go:130] ! I0314 19:19:14.324676       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0314 19:42:18.145964    8428 command_runner.go:130] ! I0314 19:19:14.324865       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 19:42:18.146055    8428 command_runner.go:130] ! I0314 19:19:14.325169       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 19:42:18.146055    8428 command_runner.go:130] ! I0314 19:19:14.474401       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0314 19:42:18.146055    8428 command_runner.go:130] ! I0314 19:19:14.474562       1 controller.go:169] "Starting ephemeral volume controller"
	I0314 19:42:18.146055    8428 command_runner.go:130] ! I0314 19:19:14.474660       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0314 19:42:18.146055    8428 command_runner.go:130] ! I0314 19:19:14.633668       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0314 19:42:18.146156    8428 command_runner.go:130] ! I0314 19:19:14.633821       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0314 19:42:18.146156    8428 command_runner.go:130] ! I0314 19:19:14.633832       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0314 19:42:18.146156    8428 command_runner.go:130] ! I0314 19:19:14.773955       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0314 19:42:18.146156    8428 command_runner.go:130] ! I0314 19:19:14.774019       1 gc_controller.go:101] "Starting GC controller"
	I0314 19:42:18.146246    8428 command_runner.go:130] ! I0314 19:19:14.774027       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 19:42:18.146246    8428 command_runner.go:130] ! I0314 19:19:14.925568       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0314 19:42:18.146246    8428 command_runner.go:130] ! I0314 19:19:14.925814       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 19:42:18.146246    8428 command_runner.go:130] ! I0314 19:19:14.925828       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 19:42:18.146340    8428 command_runner.go:130] ! I0314 19:19:15.075328       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0314 19:42:18.146340    8428 command_runner.go:130] ! I0314 19:19:15.075556       1 job_controller.go:226] "Starting job controller"
	I0314 19:42:18.146340    8428 command_runner.go:130] ! I0314 19:19:15.075634       1 shared_informer.go:311] Waiting for caches to sync for job
	I0314 19:42:18.146340    8428 command_runner.go:130] ! I0314 19:19:15.225929       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0314 19:42:18.146340    8428 command_runner.go:130] ! I0314 19:19:15.226065       1 expand_controller.go:328] "Starting expand controller"
	I0314 19:42:18.146430    8428 command_runner.go:130] ! I0314 19:19:15.226077       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0314 19:42:18.146430    8428 command_runner.go:130] ! I0314 19:19:15.378471       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:42:18.146430    8428 command_runner.go:130] ! I0314 19:19:15.378640       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:42:18.146430    8428 command_runner.go:130] ! I0314 19:19:15.379237       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:42:18.146519    8428 command_runner.go:130] ! I0314 19:19:15.525089       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:42:18.146519    8428 command_runner.go:130] ! I0314 19:19:15.525565       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:42:18.146607    8428 command_runner.go:130] ! I0314 19:19:15.525643       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:42:18.146607    8428 command_runner.go:130] ! I0314 19:19:15.679545       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:42:18.146607    8428 command_runner.go:130] ! I0314 19:19:15.679611       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:42:18.146607    8428 command_runner.go:130] ! I0314 19:19:15.679619       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:42:18.146696    8428 command_runner.go:130] ! I0314 19:19:15.825516       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:42:18.146696    8428 command_runner.go:130] ! I0314 19:19:15.825908       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:42:18.146696    8428 command_runner.go:130] ! I0314 19:19:15.825920       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:42:18.146785    8428 command_runner.go:130] ! I0314 19:19:15.976308       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0314 19:42:18.146785    8428 command_runner.go:130] ! I0314 19:19:15.976673       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0314 19:42:18.146785    8428 command_runner.go:130] ! I0314 19:19:15.976858       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0314 19:42:18.146785    8428 command_runner.go:130] ! I0314 19:19:15.993409       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:18.146871    8428 command_runner.go:130] ! I0314 19:19:16.017841       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000\" does not exist"
	I0314 19:42:18.146871    8428 command_runner.go:130] ! I0314 19:19:16.022817       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:18.146871    8428 command_runner.go:130] ! I0314 19:19:16.023332       1 shared_informer.go:318] Caches are synced for TTL
	I0314 19:42:18.146967    8428 command_runner.go:130] ! I0314 19:19:16.025413       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 19:42:18.146967    8428 command_runner.go:130] ! I0314 19:19:16.025667       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 19:42:18.146967    8428 command_runner.go:130] ! I0314 19:19:16.025909       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 19:42:18.146967    8428 command_runner.go:130] ! I0314 19:19:16.026194       1 shared_informer.go:318] Caches are synced for expand
	I0314 19:42:18.146967    8428 command_runner.go:130] ! I0314 19:19:16.030689       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 19:42:18.147059    8428 command_runner.go:130] ! I0314 19:19:16.042937       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 19:42:18.147059    8428 command_runner.go:130] ! I0314 19:19:16.063170       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 19:42:18.147059    8428 command_runner.go:130] ! I0314 19:19:16.069816       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0314 19:42:18.147059    8428 command_runner.go:130] ! I0314 19:19:16.069953       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0314 19:42:18.147149    8428 command_runner.go:130] ! I0314 19:19:16.071382       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:18.147149    8428 command_runner.go:130] ! I0314 19:19:16.072881       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 19:42:18.147149    8428 command_runner.go:130] ! I0314 19:19:16.075260       1 shared_informer.go:318] Caches are synced for GC
	I0314 19:42:18.147149    8428 command_runner.go:130] ! I0314 19:19:16.075273       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 19:42:18.147237    8428 command_runner.go:130] ! I0314 19:19:16.075312       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 19:42:18.147237    8428 command_runner.go:130] ! I0314 19:19:16.076852       1 shared_informer.go:318] Caches are synced for HPA
	I0314 19:42:18.147237    8428 command_runner.go:130] ! I0314 19:19:16.077008       1 shared_informer.go:318] Caches are synced for crt configmap
	I0314 19:42:18.147237    8428 command_runner.go:130] ! I0314 19:19:16.077022       1 shared_informer.go:318] Caches are synced for job
	I0314 19:42:18.147237    8428 command_runner.go:130] ! I0314 19:19:16.079681       1 shared_informer.go:318] Caches are synced for deployment
	I0314 19:42:18.147325    8428 command_runner.go:130] ! I0314 19:19:16.079893       1 shared_informer.go:318] Caches are synced for cronjob
	I0314 19:42:18.147325    8428 command_runner.go:130] ! I0314 19:19:16.085788       1 shared_informer.go:318] Caches are synced for node
	I0314 19:42:18.147325    8428 command_runner.go:130] ! I0314 19:19:16.085869       1 range_allocator.go:174] "Sending events to api server"
	I0314 19:42:18.147325    8428 command_runner.go:130] ! I0314 19:19:16.085937       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 19:42:18.147414    8428 command_runner.go:130] ! I0314 19:19:16.085945       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 19:42:18.147414    8428 command_runner.go:130] ! I0314 19:19:16.085951       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 19:42:18.147414    8428 command_runner.go:130] ! I0314 19:19:16.086224       1 shared_informer.go:318] Caches are synced for PVC protection
	I0314 19:42:18.147414    8428 command_runner.go:130] ! I0314 19:19:16.093730       1 shared_informer.go:318] Caches are synced for disruption
	I0314 19:42:18.147504    8428 command_runner.go:130] ! I0314 19:19:16.093802       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:18.147504    8428 command_runner.go:130] ! I0314 19:19:16.097148       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 19:42:18.147504    8428 command_runner.go:130] ! I0314 19:19:16.098688       1 shared_informer.go:318] Caches are synced for service account
	I0314 19:42:18.147504    8428 command_runner.go:130] ! I0314 19:19:16.102404       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000" podCIDRs=["10.244.0.0/24"]
	I0314 19:42:18.147592    8428 command_runner.go:130] ! I0314 19:19:16.112396       1 shared_informer.go:318] Caches are synced for taint
	I0314 19:42:18.147592    8428 command_runner.go:130] ! I0314 19:19:16.112849       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 19:42:18.147592    8428 command_runner.go:130] ! I0314 19:19:16.113070       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000"
	I0314 19:42:18.147680    8428 command_runner.go:130] ! I0314 19:19:16.113155       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0314 19:42:18.147680    8428 command_runner.go:130] ! I0314 19:19:16.112659       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 19:42:18.147680    8428 command_runner.go:130] ! I0314 19:19:16.113865       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 19:42:18.147680    8428 command_runner.go:130] ! I0314 19:19:16.113966       1 taint_manager.go:210] "Sending events to api server"
	I0314 19:42:18.147680    8428 command_runner.go:130] ! I0314 19:19:16.115068       1 shared_informer.go:318] Caches are synced for namespace
	I0314 19:42:18.147777    8428 command_runner.go:130] ! I0314 19:19:16.118281       1 event.go:307] "Event occurred" object="multinode-442000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000 event: Registered Node multinode-442000 in Controller"
	I0314 19:42:18.147777    8428 command_runner.go:130] ! I0314 19:19:16.134584       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0314 19:42:18.147777    8428 command_runner.go:130] ! I0314 19:19:16.151625       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.147866    8428 command_runner.go:130] ! I0314 19:19:16.171551       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.147866    8428 command_runner.go:130] ! I0314 19:19:16.174341       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.147959    8428 command_runner.go:130] ! I0314 19:19:16.174358       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.147959    8428 command_runner.go:130] ! I0314 19:19:16.184987       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:18.147959    8428 command_runner.go:130] ! I0314 19:19:16.223118       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 19:42:18.147959    8428 command_runner.go:130] ! I0314 19:19:16.225526       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 19:42:18.148048    8428 command_runner.go:130] ! I0314 19:19:16.225950       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 19:42:18.148048    8428 command_runner.go:130] ! I0314 19:19:16.274020       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 19:42:18.148048    8428 command_runner.go:130] ! I0314 19:19:16.320250       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7b9lf"
	I0314 19:42:18.148142    8428 command_runner.go:130] ! I0314 19:19:16.328650       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cg28g"
	I0314 19:42:18.148142    8428 command_runner.go:130] ! I0314 19:19:16.626855       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:18.148142    8428 command_runner.go:130] ! I0314 19:19:16.633099       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:18.148142    8428 command_runner.go:130] ! I0314 19:19:16.633344       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 19:42:18.148231    8428 command_runner.go:130] ! I0314 19:19:16.789964       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0314 19:42:18.148231    8428 command_runner.go:130] ! I0314 19:19:17.099870       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pvxpr"
	I0314 19:42:18.148319    8428 command_runner.go:130] ! I0314 19:19:17.114819       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-d22jc"
	I0314 19:42:18.148319    8428 command_runner.go:130] ! I0314 19:19:17.146456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="355.713874ms"
	I0314 19:42:18.148319    8428 command_runner.go:130] ! I0314 19:19:17.166202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.688691ms"
	I0314 19:42:18.148407    8428 command_runner.go:130] ! I0314 19:19:17.169087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="2.771063ms"
	I0314 19:42:18.148407    8428 command_runner.go:130] ! I0314 19:19:18.399096       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0314 19:42:18.148407    8428 command_runner.go:130] ! I0314 19:19:18.448322       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-pvxpr"
	I0314 19:42:18.148495    8428 command_runner.go:130] ! I0314 19:19:18.482373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.944747ms"
	I0314 19:42:18.148495    8428 command_runner.go:130] ! I0314 19:19:18.500300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.716936ms"
	I0314 19:42:18.148495    8428 command_runner.go:130] ! I0314 19:19:18.500887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.317µs"
	I0314 19:42:18.148584    8428 command_runner.go:130] ! I0314 19:19:26.475232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.515µs"
	I0314 19:42:18.148584    8428 command_runner.go:130] ! I0314 19:19:26.505160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.309µs"
	I0314 19:42:18.148584    8428 command_runner.go:130] ! I0314 19:19:28.423231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.310782ms"
	I0314 19:42:18.148584    8428 command_runner.go:130] ! I0314 19:19:28.423925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.006µs"
	I0314 19:42:18.148675    8428 command_runner.go:130] ! I0314 19:19:31.116802       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0314 19:42:18.148675    8428 command_runner.go:130] ! I0314 19:22:02.467925       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:42:18.148754    8428 command_runner.go:130] ! I0314 19:22:02.479576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m02" podCIDRs=["10.244.1.0/24"]
	I0314 19:42:18.148790    8428 command_runner.go:130] ! I0314 19:22:02.507610       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-72dzs"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:02.511169       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-c7m4p"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:06.145908       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:06.146201       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:20.862710       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.188036       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.218022       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8drpb"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.241867       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-7446n"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.267427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="80.313691ms"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.292961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="25.159362ms"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.311264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.241692ms"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:45.311407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="93.911µs"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:48.320252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.515467ms"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:48.320403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.303µs"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:48.344640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.018521ms"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:22:48.344838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.804µs"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:26:25.208780       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:26:25.214591       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:26:25.248082       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.2.0/24"]
	I0314 19:42:18.148823    8428 command_runner.go:130] ! I0314 19:26:25.265233       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-r7zdb"
	I0314 19:42:18.149355    8428 command_runner.go:130] ! I0314 19:26:25.273144       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w2qls"
	I0314 19:42:18.149443    8428 command_runner.go:130] ! I0314 19:26:26.207170       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:42:18.149443    8428 command_runner.go:130] ! I0314 19:26:26.207236       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:18.149530    8428 command_runner.go:130] ! I0314 19:26:43.758846       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.149530    8428 command_runner.go:130] ! I0314 19:33:46.333556       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:42:18.149618    8428 command_runner.go:130] ! I0314 19:33:46.333891       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.149618    8428 command_runner.go:130] ! I0314 19:33:46.348976       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.149706    8428 command_runner.go:130] ! I0314 19:33:46.370200       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.149706    8428 command_runner.go:130] ! I0314 19:36:39.868492       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.149706    8428 command_runner.go:130] ! I0314 19:36:41.400896       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-442000-m03 event: Removing Node multinode-442000-m03 from Controller"
	I0314 19:42:18.149794    8428 command_runner.go:130] ! I0314 19:36:47.335802       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:18.149883    8428 command_runner.go:130] ! I0314 19:36:47.336128       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.149883    8428 command_runner.go:130] ! I0314 19:36:47.352987       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.3.0/24"]
	I0314 19:42:18.149883    8428 command_runner.go:130] ! I0314 19:36:51.403261       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:18.149973    8428 command_runner.go:130] ! I0314 19:36:54.976864       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.149973    8428 command_runner.go:130] ! I0314 19:38:21.463528       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:18.149973    8428 command_runner.go:130] ! I0314 19:38:21.463818       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:42:18.150063    8428 command_runner.go:130] ! I0314 19:38:21.486796       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.150063    8428 command_runner.go:130] ! I0314 19:38:21.501217       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:18.167222    8428 logs.go:123] Gathering logs for kindnet [999e4c168afe] ...
	I0314 19:42:18.167222    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 999e4c168afe"
	I0314 19:42:18.193087    8428 command_runner.go:130] ! I0314 19:41:08.409720       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0314 19:42:18.193584    8428 command_runner.go:130] ! I0314 19:41:08.410195       1 main.go:107] hostIP = 172.17.93.236
	I0314 19:42:18.193620    8428 command_runner.go:130] ! podIP = 172.17.93.236
	I0314 19:42:18.193620    8428 command_runner.go:130] ! I0314 19:41:08.411178       1 main.go:116] setting mtu 1500 for CNI 
	I0314 19:42:18.193620    8428 command_runner.go:130] ! I0314 19:41:08.411230       1 main.go:146] kindnetd IP family: "ipv4"
	I0314 19:42:18.193620    8428 command_runner.go:130] ! I0314 19:41:08.411277       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0314 19:42:18.193687    8428 command_runner.go:130] ! I0314 19:41:38.747509       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0314 19:42:18.193687    8428 command_runner.go:130] ! I0314 19:41:38.770843       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:18.193687    8428 command_runner.go:130] ! I0314 19:41:38.770994       1 main.go:227] handling current node
	I0314 19:42:18.193725    8428 command_runner.go:130] ! I0314 19:41:38.771413       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:18.193747    8428 command_runner.go:130] ! I0314 19:41:38.771428       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:18.193747    8428 command_runner.go:130] ! I0314 19:41:38.771670       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.80.135 Flags: [] Table: 0} 
	I0314 19:42:18.193747    8428 command_runner.go:130] ! I0314 19:41:38.771817       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:18.193800    8428 command_runner.go:130] ! I0314 19:41:38.771827       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:18.193800    8428 command_runner.go:130] ! I0314 19:41:38.771944       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.84.215 Flags: [] Table: 0} 
	I0314 19:42:18.193800    8428 command_runner.go:130] ! I0314 19:41:48.777997       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:18.193800    8428 command_runner.go:130] ! I0314 19:41:48.778091       1 main.go:227] handling current node
	I0314 19:42:18.193800    8428 command_runner.go:130] ! I0314 19:41:48.778105       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:18.193861    8428 command_runner.go:130] ! I0314 19:41:48.778113       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:18.193861    8428 command_runner.go:130] ! I0314 19:41:48.778217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:18.193861    8428 command_runner.go:130] ! I0314 19:41:48.778373       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:18.193861    8428 command_runner.go:130] ! I0314 19:41:58.793215       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:18.193861    8428 command_runner.go:130] ! I0314 19:41:58.793285       1 main.go:227] handling current node
	I0314 19:42:18.193937    8428 command_runner.go:130] ! I0314 19:41:58.793297       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:18.193937    8428 command_runner.go:130] ! I0314 19:41:58.793304       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:18.193937    8428 command_runner.go:130] ! I0314 19:41:58.793793       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:18.193937    8428 command_runner.go:130] ! I0314 19:41:58.793859       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:18.193998    8428 command_runner.go:130] ! I0314 19:42:08.808709       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:18.193998    8428 command_runner.go:130] ! I0314 19:42:08.808803       1 main.go:227] handling current node
	I0314 19:42:18.193998    8428 command_runner.go:130] ! I0314 19:42:08.808818       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:18.193998    8428 command_runner.go:130] ! I0314 19:42:08.808826       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:18.194061    8428 command_runner.go:130] ! I0314 19:42:08.809153       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:18.194061    8428 command_runner.go:130] ! I0314 19:42:08.809168       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:18.196186    8428 logs.go:123] Gathering logs for kubelet ...
	I0314 19:42:18.196186    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516074    1388 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516440    1388 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516773    1388 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: E0314 19:40:57.516893    1388 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293295    1450 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293422    1450 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293759    1450 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: E0314 19:40:58.293809    1450 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270178    1523 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270275    1523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270469    1523 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.272943    1523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.286808    1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.333673    1523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0314 19:42:18.230499    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335204    1523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0314 19:42:18.231058    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335543    1523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0314 19:42:18.231058    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335688    1523 topology_manager.go:138] "Creating topology manager with none policy"
	I0314 19:42:18.231058    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335703    1523 container_manager_linux.go:301] "Creating device plugin manager"
	I0314 19:42:18.231058    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.336879    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0314 19:42:18.231058    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.338507    1523 kubelet.go:393] "Attempting to sync node with API server"
	I0314 19:42:18.231173    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.338606    1523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0314 19:42:18.231173    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.339942    1523 kubelet.go:309] "Adding apiserver pod source"
	I0314 19:42:18.231173    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.339973    1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0314 19:42:18.231173    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.342644    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.231284    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.342728    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.231284    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.352846    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.231284    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.353005    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.231284    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.362091    1523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0314 19:42:18.231394    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.368654    1523 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0314 19:42:18.231394    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.370831    1523 server.go:1232] "Started kubelet"
	I0314 19:42:18.231394    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.376404    1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0314 19:42:18.231394    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.381472    1523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0314 19:42:18.231394    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.381715    1523 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0314 19:42:18.231394    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.383735    1523 server.go:462] "Adding debug handlers to kubelet server"
	I0314 19:42:18.231503    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.385265    1523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0314 19:42:18.231503    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.387577    1523 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0314 19:42:18.231503    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.392182    1523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0314 19:42:18.231503    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.392853    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="200ms"
	I0314 19:42:18.231612    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.392921    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.231612    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.392970    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.231721    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.402867    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-442000.17bcb8e6e82683f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-442000", UID:"multinode-442000", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-442000"}, FirstTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), LastTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-442000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.93.236:8443: connect: connection refused'(may retry after sleeping)
	I0314 19:42:18.231721    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.431568    1523 reconciler_new.go:29] "Reconciler: start to sync state"
	I0314 19:42:18.231721    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453043    1523 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0314 19:42:18.231721    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453062    1523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0314 19:42:18.231840    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453088    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0314 19:42:18.231840    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453812    1523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0314 19:42:18.231840    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453838    1523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0314 19:42:18.231900    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453846    1523 policy_none.go:49] "None policy: Start"
	I0314 19:42:18.231900    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.459854    1523 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0314 19:42:18.231944    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.459925    1523 state_mem.go:35] "Initializing new in-memory state store"
	I0314 19:42:18.231944    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.460715    1523 state_mem.go:75] "Updated machine memory state"
	I0314 19:42:18.231944    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.466366    1523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0314 19:42:18.231944    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.471455    1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0314 19:42:18.231944    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.475344    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0314 19:42:18.232145    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478780    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0314 19:42:18.232145    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478820    1523 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0314 19:42:18.232145    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478846    1523 kubelet.go:2303] "Starting kubelet main sync loop"
	I0314 19:42:18.232266    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.478899    1523 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0314 19:42:18.232266    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.485952    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.232266    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.487569    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.232266    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.493845    1523 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-442000\" not found"
	I0314 19:42:18.232378    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.501023    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:18.232513    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.501915    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:18.232620    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.503739    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0314 19:42:18.232620    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0314 19:42:18.232782    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0314 19:42:18.232871    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0314 19:42:18.232871    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0314 19:42:18.232871    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.578961    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5b88117f99a24e81a324ab026c69a7058a7c1bc88d9b9a5386134abc257bba"
	I0314 19:42:18.232871    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.578983    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e39762d7a6437164a9b2c6dd22b1f36b57514310190ce4acc3349001cb1774"
	I0314 19:42:18.232980    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.579017    1523 topology_manager.go:215] "Topology Admit Handler" podUID="2b2434280023596d1e3c90125a7219ed" podNamespace="kube-system" podName="kube-scheduler-multinode-442000"
	I0314 19:42:18.232980    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.592991    1523 topology_manager.go:215] "Topology Admit Handler" podUID="7754d2f32966faec8123dc3b8a2af767" podNamespace="kube-system" podName="kube-apiserver-multinode-442000"
	I0314 19:42:18.232980    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.594193    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="400ms"
	I0314 19:42:18.233091    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.609977    1523 topology_manager.go:215] "Topology Admit Handler" podUID="a7ee530f2bd843eddeace8cd6ec0d204" podNamespace="kube-system" podName="kube-controller-manager-multinode-442000"
	I0314 19:42:18.233091    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.622973    1523 topology_manager.go:215] "Topology Admit Handler" podUID="fa99a5621d016aa714804afcaa1e0a53" podNamespace="kube-system" podName="etcd-multinode-442000"
	I0314 19:42:18.233091    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.634832    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b2434280023596d1e3c90125a7219ed-kubeconfig\") pod \"kube-scheduler-multinode-442000\" (UID: \"2b2434280023596d1e3c90125a7219ed\") " pod="kube-system/kube-scheduler-multinode-442000"
	I0314 19:42:18.233091    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640587    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b179d157b6b2f71cc980c7ea5060a613be77e84e89947fbcb91a687ea7310eaf"
	I0314 19:42:18.233203    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640610    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046b896affe9f3219822b857a6b4dfa1427854d5df420b6b2e1cec631372548"
	I0314 19:42:18.233203    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640625    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773"
	I0314 19:42:18.233203    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640637    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b3244b47278e22e56ab0362b7a74ee80ca2806fb1074d718b0278b5bc70be76"
	I0314 19:42:18.233203    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640648    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0"
	I0314 19:42:18.233203    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640663    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="102c907609a3ac28e95d46e2671477684c5a043672e21597c677ee9dbfcb7e08"
	I0314 19:42:18.233312    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640674    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab390fc53b998ec55449f16c05933add797f430f2cc6f4b55afabf79cd8b0bc7"
	I0314 19:42:18.233312    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.713400    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:18.233312    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.714712    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:18.233405    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736377    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-ca-certs\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:18.233476    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736439    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-k8s-certs\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:18.233476    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736466    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:18.233548    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736490    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-flexvolume-dir\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:18.233548    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736521    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-k8s-certs\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:18.233619    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736546    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/fa99a5621d016aa714804afcaa1e0a53-etcd-certs\") pod \"etcd-multinode-442000\" (UID: \"fa99a5621d016aa714804afcaa1e0a53\") " pod="kube-system/etcd-multinode-442000"
	I0314 19:42:18.233690    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736609    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-ca-certs\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:18.233690    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736642    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-kubeconfig\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:18.233762    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736675    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:18.233762    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736706    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/fa99a5621d016aa714804afcaa1e0a53-etcd-data\") pod \"etcd-multinode-442000\" (UID: \"fa99a5621d016aa714804afcaa1e0a53\") " pod="kube-system/etcd-multinode-442000"
	I0314 19:42:18.233837    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.996146    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="800ms"
	I0314 19:42:18.233911    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.009288    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-442000.17bcb8e6e82683f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-442000", UID:"multinode-442000", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-442000"}, FirstTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), LastTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-442000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.93.236:8443: connect: connection refused'(may retry after sleeping)
	I0314 19:42:18.233983    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.128790    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:18.233983    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.130034    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:18.233983    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.475229    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234054    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.475367    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234054    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.647700    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234054    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.647839    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234125    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.684558    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c70744e60ac50b50085376d0c124ff15cc884b8a836b0085ef71a65ddb06bcfd"
	I0314 19:42:18.234125    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.767121    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234197    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.767283    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234197    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.797772    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="1.6s"
	I0314 19:42:18.234269    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.907277    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234341    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.907408    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:18.234341    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.963548    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:18.234341    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.967786    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:18.234413    8428 command_runner.go:130] > Mar 14 19:41:03 multinode-442000 kubelet[1523]: I0314 19:41:03.581966    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:18.234413    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.875219    1523 kubelet_node_status.go:108] "Node was previously registered" node="multinode-442000"
	I0314 19:42:18.234413    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.875953    1523 kubelet_node_status.go:73] "Successfully registered node" node="multinode-442000"
	I0314 19:42:18.234413    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.881726    1523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0314 19:42:18.234486    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.882677    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0314 19:42:18.234486    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.894905    1523 setters.go:552] "Node became not ready" node="multinode-442000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-14T19:41:05Z","lastTransitionTime":"2024-03-14T19:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0314 19:42:18.234558    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: E0314 19:41:05.973748    1523 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-multinode-442000\" already exists" pod="kube-system/etcd-multinode-442000"
	I0314 19:42:18.234558    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.346543    1523 apiserver.go:52] "Watching apiserver"
	I0314 19:42:18.234558    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355573    1523 topology_manager.go:215] "Topology Admit Handler" podUID="677b9084-0026-4b21-b041-445940624ed7" podNamespace="kube-system" podName="kindnet-7b9lf"
	I0314 19:42:18.234558    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355823    1523 topology_manager.go:215] "Topology Admit Handler" podUID="c7f798bf-6722-4731-af8d-ccd5703d116e" podNamespace="kube-system" podName="kube-proxy-cg28g"
	I0314 19:42:18.234629    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355970    1523 topology_manager.go:215] "Topology Admit Handler" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac" podNamespace="kube-system" podName="coredns-5dd5756b68-d22jc"
	I0314 19:42:18.234701    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.356220    1523 topology_manager.go:215] "Topology Admit Handler" podUID="65d76566-4401-4b28-8452-10ed98624901" podNamespace="kube-system" podName="storage-provisioner"
	I0314 19:42:18.234701    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.356515    1523 topology_manager.go:215] "Topology Admit Handler" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2" podNamespace="default" podName="busybox-5b5d89c9d6-7446n"
	I0314 19:42:18.234701    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.356776    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.234772    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.356948    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.234772    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.360847    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-442000" podUID="02a2d011-5f4c-451c-9698-a88e42e4b6c9"
	I0314 19:42:18.234844    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.388530    1523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0314 19:42:18.234844    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.394882    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:18.234844    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419699    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7f798bf-6722-4731-af8d-ccd5703d116e-xtables-lock\") pod \"kube-proxy-cg28g\" (UID: \"c7f798bf-6722-4731-af8d-ccd5703d116e\") " pod="kube-system/kube-proxy-cg28g"
	I0314 19:42:18.234917    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419828    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-cni-cfg\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:18.234917    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419854    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-lib-modules\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:18.234989    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419895    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/65d76566-4401-4b28-8452-10ed98624901-tmp\") pod \"storage-provisioner\" (UID: \"65d76566-4401-4b28-8452-10ed98624901\") " pod="kube-system/storage-provisioner"
	I0314 19:42:18.235062    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419943    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-xtables-lock\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:18.235062    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.420062    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7f798bf-6722-4731-af8d-ccd5703d116e-lib-modules\") pod \"kube-proxy-cg28g\" (UID: \"c7f798bf-6722-4731-af8d-ccd5703d116e\") " pod="kube-system/kube-proxy-cg28g"
	I0314 19:42:18.235062    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.420370    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.235137    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.420509    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:06.920467401 +0000 UTC m=+6.742091622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.235208    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447169    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235208    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447481    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235283    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447769    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:06.9477485 +0000 UTC m=+6.769372721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235283    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.496544    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81fdcd9740169a0b72b7c7316eeac39f" path="/var/lib/kubelet/pods/81fdcd9740169a0b72b7c7316eeac39f/volumes"
	I0314 19:42:18.235283    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.497856    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="92e70beb375f9f247f5f8395dc065033" path="/var/lib/kubelet/pods/92e70beb375f9f247f5f8395dc065033/volumes"
	I0314 19:42:18.235354    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.840791    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-442000" podUID="8974ad44-5d36-48f0-bc6b-9115bab5fb5e"
	I0314 19:42:18.235427    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.864488    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-442000" podStartSLOduration=0.864428449 podCreationTimestamp="2024-03-14 19:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:41:06.656175631 +0000 UTC m=+6.477799952" watchObservedRunningTime="2024-03-14 19:41:06.864428449 +0000 UTC m=+6.686052670"
	I0314 19:42:18.235427    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.889820    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-442000"
	I0314 19:42:18.235427    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.925613    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.235499    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.925789    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:07.925744766 +0000 UTC m=+7.747368987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.235499    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026456    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235570    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026485    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235628    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026583    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:08.02656612 +0000 UTC m=+7.848190341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.479340    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.479540    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.934416    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.934566    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:09.934544359 +0000 UTC m=+9.756168580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035285    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035328    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035382    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:10.035364414 +0000 UTC m=+9.856988635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: I0314 19:41:08.192454    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-442000" podUID="8974ad44-5d36-48f0-bc6b-9115bab5fb5e"
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: I0314 19:41:08.232807    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-442000" podStartSLOduration=2.232765597 podCreationTimestamp="2024-03-14 19:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:41:08.211688076 +0000 UTC m=+8.033312297" watchObservedRunningTime="2024-03-14 19:41:08.232765597 +0000 UTC m=+8.054389818"
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.480285    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.480350    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.954598    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.954683    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:13.95466674 +0000 UTC m=+13.776290961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055917    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055948    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055999    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:14.055983733 +0000 UTC m=+13.877608054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.235691    8428 command_runner.go:130] > Mar 14 19:41:11 multinode-442000 kubelet[1523]: E0314 19:41:11.480167    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236215    8428 command_runner.go:130] > Mar 14 19:41:11 multinode-442000 kubelet[1523]: E0314 19:41:11.480285    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.480095    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.480797    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.988392    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.988528    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:21.98850961 +0000 UTC m=+21.810133831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089208    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089365    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089427    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:22.089409571 +0000 UTC m=+21.911033792 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:15 multinode-442000 kubelet[1523]: E0314 19:41:15.480116    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:15 multinode-442000 kubelet[1523]: E0314 19:41:15.480286    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:17 multinode-442000 kubelet[1523]: E0314 19:41:17.479583    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:17 multinode-442000 kubelet[1523]: E0314 19:41:17.480025    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:19 multinode-442000 kubelet[1523]: E0314 19:41:19.480562    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:19 multinode-442000 kubelet[1523]: E0314 19:41:19.480625    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:21 multinode-442000 kubelet[1523]: E0314 19:41:21.479895    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236288    8428 command_runner.go:130] > Mar 14 19:41:21 multinode-442000 kubelet[1523]: E0314 19:41:21.480437    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236811    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.061436    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.236811    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.061515    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:38.061499618 +0000 UTC m=+37.883123839 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162555    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162603    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162667    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:38.162650651 +0000 UTC m=+37.984274872 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:23 multinode-442000 kubelet[1523]: E0314 19:41:23.480157    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:23 multinode-442000 kubelet[1523]: E0314 19:41:23.481151    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:25 multinode-442000 kubelet[1523]: E0314 19:41:25.479970    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:25 multinode-442000 kubelet[1523]: E0314 19:41:25.480065    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:27 multinode-442000 kubelet[1523]: E0314 19:41:27.480032    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:27 multinode-442000 kubelet[1523]: E0314 19:41:27.480122    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:29 multinode-442000 kubelet[1523]: E0314 19:41:29.480034    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:29 multinode-442000 kubelet[1523]: E0314 19:41:29.480291    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:31 multinode-442000 kubelet[1523]: E0314 19:41:31.479554    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:31 multinode-442000 kubelet[1523]: E0314 19:41:31.479650    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:33 multinode-442000 kubelet[1523]: E0314 19:41:33.479299    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:33 multinode-442000 kubelet[1523]: E0314 19:41:33.479835    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.236890    8428 command_runner.go:130] > Mar 14 19:41:35 multinode-442000 kubelet[1523]: E0314 19:41:35.479778    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.237426    8428 command_runner.go:130] > Mar 14 19:41:35 multinode-442000 kubelet[1523]: E0314 19:41:35.480230    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 kubelet[1523]: E0314 19:41:37.480388    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 kubelet[1523]: E0314 19:41:37.480921    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.089907    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.090056    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:42:10.090036325 +0000 UTC m=+69.911660546 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191172    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191351    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191425    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:42:10.191406835 +0000 UTC m=+70.013031056 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: I0314 19:41:38.578418    1523 scope.go:117] "RemoveContainer" containerID="07c2872c48edaa090b20d66267963c0d69c5c9eb97824b199af2d7e611ac596a"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: I0314 19:41:38.578814    1523 scope.go:117] "RemoveContainer" containerID="2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.579025    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(65d76566-4401-4b28-8452-10ed98624901)\"" pod="kube-system/storage-provisioner" podUID="65d76566-4401-4b28-8452-10ed98624901"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:39 multinode-442000 kubelet[1523]: E0314 19:41:39.479691    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:39 multinode-442000 kubelet[1523]: E0314 19:41:39.479909    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: E0314 19:41:41.479574    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: E0314 19:41:41.480003    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: I0314 19:41:41.518811    1523 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 kubelet[1523]: I0314 19:41:53.480206    1523 scope.go:117] "RemoveContainer" containerID="2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: I0314 19:42:00.447192    1523 scope.go:117] "RemoveContainer" containerID="9585e3eb2ead2f471eb0d22c8e29e4bfd954095774af365d80329ea39fff78e1"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: I0314 19:42:00.490865    1523 scope.go:117] "RemoveContainer" containerID="cd640f130e429bd4182c258358ec791604b8f307f9c45f2e3880e9b1a7df666a"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: E0314 19:42:00.516969    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.167906    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f"
	I0314 19:42:18.237494    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.214897    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439"
	I0314 19:42:18.278378    8428 logs.go:123] Gathering logs for kube-scheduler [32d90a3ea213] ...
	I0314 19:42:18.278378    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d90a3ea213"
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:03.376319       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:18.305382    8428 command_runner.go:130] ! W0314 19:41:05.770317       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0314 19:42:18.305382    8428 command_runner.go:130] ! W0314 19:41:05.770426       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:18.305382    8428 command_runner.go:130] ! W0314 19:41:05.770581       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0314 19:42:18.305382    8428 command_runner.go:130] ! W0314 19:41:05.770640       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.841573       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.841674       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.844125       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.845062       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.845143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.845293       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:18.305382    8428 command_runner.go:130] ! I0314 19:41:05.946840       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:18.306387    8428 logs.go:123] Gathering logs for kube-proxy [2a62baf3f1b4] ...
	I0314 19:42:18.306387    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a62baf3f1b4"
	I0314 19:42:18.335928    8428 command_runner.go:130] ! I0314 19:19:18.247796       1 server_others.go:69] "Using iptables proxy"
	I0314 19:42:18.335988    8428 command_runner.go:130] ! I0314 19:19:18.275162       1 node.go:141] Successfully retrieved node IP: 172.17.86.124
	I0314 19:42:18.335988    8428 command_runner.go:130] ! I0314 19:19:18.379821       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:42:18.335988    8428 command_runner.go:130] ! I0314 19:19:18.379851       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:42:18.336060    8428 command_runner.go:130] ! I0314 19:19:18.395429       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:42:18.336060    8428 command_runner.go:130] ! I0314 19:19:18.395506       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:42:18.336060    8428 command_runner.go:130] ! I0314 19:19:18.395856       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:42:18.336115    8428 command_runner.go:130] ! I0314 19:19:18.395890       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:18.336115    8428 command_runner.go:130] ! I0314 19:19:18.417861       1 config.go:188] "Starting service config controller"
	I0314 19:42:18.336115    8428 command_runner.go:130] ! I0314 19:19:18.417913       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:42:18.336170    8428 command_runner.go:130] ! I0314 19:19:18.417950       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:42:18.336170    8428 command_runner.go:130] ! I0314 19:19:18.420511       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:42:18.336208    8428 command_runner.go:130] ! I0314 19:19:18.426566       1 config.go:315] "Starting node config controller"
	I0314 19:42:18.336208    8428 command_runner.go:130] ! I0314 19:19:18.426600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:42:18.336258    8428 command_runner.go:130] ! I0314 19:19:18.519508       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:42:18.336258    8428 command_runner.go:130] ! I0314 19:19:18.524347       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:42:18.336293    8428 command_runner.go:130] ! I0314 19:19:18.527360       1 shared_informer.go:318] Caches are synced for node config
	I0314 19:42:18.337010    8428 logs.go:123] Gathering logs for container status ...
	I0314 19:42:18.337010    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:42:18.428614    8428 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0314 19:42:18.428614    8428 command_runner.go:130] > b159aedddf94a       ead0a4a53df89                                                                                         7 seconds ago        Running             coredns                   1                   89f326046d00d       coredns-5dd5756b68-d22jc
	I0314 19:42:18.428614    8428 command_runner.go:130] > 813492ad2d666       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   cddebe360bf3a       busybox-5b5d89c9d6-7446n
	I0314 19:42:18.428614    8428 command_runner.go:130] > 3167caea2534f       6e38f40d628db                                                                                         25 seconds ago       Running             storage-provisioner       2                   a723f141543f2       storage-provisioner
	I0314 19:42:18.428614    8428 command_runner.go:130] > 999e4c168afef       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   a9176b5544663       kindnet-7b9lf
	I0314 19:42:18.428614    8428 command_runner.go:130] > 497007582e446       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   f513a7aff6720       kube-proxy-cg28g
	I0314 19:42:18.428614    8428 command_runner.go:130] > 2876622a2618d       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   a723f141543f2       storage-provisioner
	I0314 19:42:18.429135    8428 command_runner.go:130] > 32d90a3ea2131       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   c70744e60ac50       kube-scheduler-multinode-442000
	I0314 19:42:18.429213    8428 command_runner.go:130] > a598d24960de8       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   a27fa2188ee4c       kube-apiserver-multinode-442000
	I0314 19:42:18.429292    8428 command_runner.go:130] > 12baf105f0bb2       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   67475bf80ddd9       kube-controller-manager-multinode-442000
	I0314 19:42:18.429401    8428 command_runner.go:130] > a81a9c43c3552       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   35dd339c8a08d       etcd-multinode-442000
	I0314 19:42:18.429476    8428 command_runner.go:130] > 0cd43cdaa31c9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   fa0f2372c88ee       busybox-5b5d89c9d6-7446n
	I0314 19:42:18.429550    8428 command_runner.go:130] > 8899bc0038935       ead0a4a53df89                                                                                         22 minutes ago       Exited              coredns                   0                   a3dba3fc54c01       coredns-5dd5756b68-d22jc
	I0314 19:42:18.429656    8428 command_runner.go:130] > 1a321c0e89971       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              22 minutes ago       Exited              kindnet-cni               0                   b046b896affe9       kindnet-7b9lf
	I0314 19:42:18.429683    8428 command_runner.go:130] > 2a62baf3f1b46       83f6cc407eed8                                                                                         23 minutes ago       Exited              kube-proxy                0                   9b3244b47278e       kube-proxy-cg28g
	I0314 19:42:18.429683    8428 command_runner.go:130] > dbb603289bf16       e3db313c6dbc0                                                                                         23 minutes ago       Exited              kube-scheduler            0                   54e39762d7a64       kube-scheduler-multinode-442000
	I0314 19:42:18.429683    8428 command_runner.go:130] > 16b80f73683dc       d058aa5ab969c                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   102c907609a3a       kube-controller-manager-multinode-442000
	I0314 19:42:20.942057    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:42:20.950293    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 200:
	ok
	I0314 19:42:20.950754    8428 round_trippers.go:463] GET https://172.17.93.236:8443/version
	I0314 19:42:20.950754    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:20.950754    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:20.950754    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:20.952431    8428 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0314 19:42:20.952431    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:20.952431    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:20.952431    8428 round_trippers.go:580]     Content-Length: 264
	I0314 19:42:20.952431    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:21 GMT
	I0314 19:42:20.952431    8428 round_trippers.go:580]     Audit-Id: ddea6ce7-c94f-4e9e-8283-b11429c3c424
	I0314 19:42:20.952431    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:20.952431    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:20.952431    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:20.952431    8428 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0314 19:42:20.952924    8428 api_server.go:141] control plane version: v1.28.4
	I0314 19:42:20.952924    8428 api_server.go:131] duration metric: took 3.7413464s to wait for apiserver health ...
	I0314 19:42:20.952924    8428 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:42:20.959195    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 19:42:20.984809    8428 command_runner.go:130] > a598d24960de
	I0314 19:42:20.984892    8428 logs.go:276] 1 containers: [a598d24960de]
	I0314 19:42:20.993697    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 19:42:21.017448    8428 command_runner.go:130] > a81a9c43c355
	I0314 19:42:21.018226    8428 logs.go:276] 1 containers: [a81a9c43c355]
	I0314 19:42:21.025637    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 19:42:21.050369    8428 command_runner.go:130] > b159aedddf94
	I0314 19:42:21.050430    8428 command_runner.go:130] > 8899bc003893
	I0314 19:42:21.050529    8428 logs.go:276] 2 containers: [b159aedddf94 8899bc003893]
	I0314 19:42:21.057547    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 19:42:21.080768    8428 command_runner.go:130] > 32d90a3ea213
	I0314 19:42:21.080768    8428 command_runner.go:130] > dbb603289bf1
	I0314 19:42:21.081742    8428 logs.go:276] 2 containers: [32d90a3ea213 dbb603289bf1]
	I0314 19:42:21.091487    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 19:42:21.144402    8428 command_runner.go:130] > 497007582e44
	I0314 19:42:21.144475    8428 command_runner.go:130] > 2a62baf3f1b4
	I0314 19:42:21.144523    8428 logs.go:276] 2 containers: [497007582e44 2a62baf3f1b4]
	I0314 19:42:21.154982    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 19:42:21.203077    8428 command_runner.go:130] > 12baf105f0bb
	I0314 19:42:21.203231    8428 command_runner.go:130] > 16b80f73683d
	I0314 19:42:21.203231    8428 logs.go:276] 2 containers: [12baf105f0bb 16b80f73683d]
	I0314 19:42:21.214969    8428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 19:42:21.242294    8428 command_runner.go:130] > 999e4c168afe
	I0314 19:42:21.242294    8428 command_runner.go:130] > 1a321c0e8997
	I0314 19:42:21.242294    8428 logs.go:276] 2 containers: [999e4c168afe 1a321c0e8997]
	I0314 19:42:21.242294    8428 logs.go:123] Gathering logs for kube-scheduler [32d90a3ea213] ...
	I0314 19:42:21.242294    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d90a3ea213"
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:03.376319       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:21.271269    8428 command_runner.go:130] ! W0314 19:41:05.770317       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0314 19:42:21.271269    8428 command_runner.go:130] ! W0314 19:41:05.770426       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:21.271269    8428 command_runner.go:130] ! W0314 19:41:05.770581       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0314 19:42:21.271269    8428 command_runner.go:130] ! W0314 19:41:05.770640       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.841573       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.841674       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.844125       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.845062       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.845143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.845293       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:21.271269    8428 command_runner.go:130] ! I0314 19:41:05.946840       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:21.273943    8428 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:42:21.274013    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:42:21.471978    8428 command_runner.go:130] > Name:               multinode-442000
	I0314 19:42:21.471978    8428 command_runner.go:130] > Roles:              control-plane
	I0314 19:42:21.472046    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:21.472046    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:21.472046    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:21.472046    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000
	I0314 19:42:21.472046    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:21.472046    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:21.472107    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:21.472107    8428 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0314 19:42:21.472107    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_19_05_0700
	I0314 19:42:21.472107    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:21.472107    8428 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0314 19:42:21.472107    8428 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0314 19:42:21.472165    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:21.472165    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:21.472165    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:21.472165    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:19:00 +0000
	I0314 19:42:21.472222    8428 command_runner.go:130] > Taints:             <none>
	I0314 19:42:21.472288    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:21.472288    8428 command_runner.go:130] > Lease:
	I0314 19:42:21.472288    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000
	I0314 19:42:21.472288    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:21.472288    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:42:17 +0000
	I0314 19:42:21.472288    8428 command_runner.go:130] > Conditions:
	I0314 19:42:21.472334    8428 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0314 19:42:21.472334    8428 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0314 19:42:21.472334    8428 command_runner.go:130] >   MemoryPressure   False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0314 19:42:21.472334    8428 command_runner.go:130] >   DiskPressure     False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0314 19:42:21.472395    8428 command_runner.go:130] >   PIDPressure      False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0314 19:42:21.472395    8428 command_runner.go:130] >   Ready            True    Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:41:41 +0000   KubeletReady                 kubelet is posting ready status
	I0314 19:42:21.472395    8428 command_runner.go:130] > Addresses:
	I0314 19:42:21.472395    8428 command_runner.go:130] >   InternalIP:  172.17.93.236
	I0314 19:42:21.472395    8428 command_runner.go:130] >   Hostname:    multinode-442000
	I0314 19:42:21.472395    8428 command_runner.go:130] > Capacity:
	I0314 19:42:21.472475    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:21.472511    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:21.472545    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:21.472545    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:21.472545    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:21.472545    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:21.472545    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:21.472545    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:21.472545    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:21.472606    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:21.472606    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:21.472606    8428 command_runner.go:130] > System Info:
	I0314 19:42:21.472635    8428 command_runner.go:130] >   Machine ID:                 37c811f81f1d4d709fd4a6eb79d70749
	I0314 19:42:21.472635    8428 command_runner.go:130] >   System UUID:                8469b663-ea90-da4f-856d-11034a8f65d8
	I0314 19:42:21.472635    8428 command_runner.go:130] >   Boot ID:                    91589624-f8f3-469e-b556-aa6dd64e54de
	I0314 19:42:21.472635    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:21.472687    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:21.472703    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:21.472703    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:21.472703    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:21.472703    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:21.472703    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:21.472703    8428 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0314 19:42:21.472703    8428 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0314 19:42:21.472703    8428 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0314 19:42:21.472703    8428 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:21.472786    8428 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0314 19:42:21.472786    8428 command_runner.go:130] >   default                     busybox-5b5d89c9d6-7446n                    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         19m
	I0314 19:42:21.472786    8428 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-d22jc                    100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     23m
	I0314 19:42:21.472786    8428 command_runner.go:130] >   kube-system                 etcd-multinode-442000                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         75s
	I0314 19:42:21.472847    8428 command_runner.go:130] >   kube-system                 kindnet-7b9lf                               100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      23m
	I0314 19:42:21.472847    8428 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-442000             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         75s
	I0314 19:42:21.472877    8428 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-442000    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         23m
	I0314 19:42:21.472919    8428 command_runner.go:130] >   kube-system                 kube-proxy-cg28g                            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         23m
	I0314 19:42:21.472919    8428 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-442000             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         23m
	I0314 19:42:21.472919    8428 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         22m
	I0314 19:42:21.472952    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:21.472970    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:21.472970    8428 command_runner.go:130] >   Resource           Requests     Limits
	I0314 19:42:21.472970    8428 command_runner.go:130] >   --------           --------     ------
	I0314 19:42:21.472970    8428 command_runner.go:130] >   cpu                850m (42%!)(MISSING)   100m (5%!)(MISSING)
	I0314 19:42:21.472970    8428 command_runner.go:130] >   memory             220Mi (10%!)(MISSING)  220Mi (10%!)(MISSING)
	I0314 19:42:21.472970    8428 command_runner.go:130] >   ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	I0314 19:42:21.472970    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	I0314 19:42:21.472970    8428 command_runner.go:130] > Events:
	I0314 19:42:21.473033    8428 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0314 19:42:21.473033    8428 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0314 19:42:21.473057    8428 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0314 19:42:21.473057    8428 command_runner.go:130] >   Normal  Starting                 72s                kube-proxy       
	I0314 19:42:21.473057    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:21.473057    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:21.473057    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:21.473118    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:21.473118    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:21.473118    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:21.473175    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:21.473175    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:21.473175    8428 command_runner.go:130] >   Normal  Starting                 23m                kubelet          Starting kubelet.
	I0314 19:42:21.473175    8428 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	I0314 19:42:21.473175    8428 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-442000 status is now: NodeReady
	I0314 19:42:21.473251    8428 command_runner.go:130] >   Normal  Starting                 81s                kubelet          Starting kubelet.
	I0314 19:42:21.473251    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  81s (x8 over 81s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	I0314 19:42:21.473281    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    81s (x8 over 81s)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	I0314 19:42:21.473281    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     81s (x7 over 81s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	I0314 19:42:21.473281    8428 command_runner.go:130] >   Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	I0314 19:42:21.473318    8428 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	I0314 19:42:21.473318    8428 command_runner.go:130] > Name:               multinode-442000-m02
	I0314 19:42:21.473357    8428 command_runner.go:130] > Roles:              <none>
	I0314 19:42:21.473373    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:21.473373    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:21.473373    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:21.473373    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000-m02
	I0314 19:42:21.473373    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:21.473373    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:21.473434    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:21.473465    8428 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0314 19:42:21.473465    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_22_02_0700
	I0314 19:42:21.473465    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:21.473500    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:21.473500    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:21.473500    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:21.473500    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:22:02 +0000
	I0314 19:42:21.473559    8428 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0314 19:42:21.473559    8428 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0314 19:42:21.473599    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:21.473599    8428 command_runner.go:130] > Lease:
	I0314 19:42:21.473599    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000-m02
	I0314 19:42:21.473634    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:21.473634    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:38:03 +0000
	I0314 19:42:21.473634    8428 command_runner.go:130] > Conditions:
	I0314 19:42:21.473684    8428 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0314 19:42:21.473684    8428 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0314 19:42:21.473684    8428 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.473733    8428 command_runner.go:130] >   DiskPressure     Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.473766    8428 command_runner.go:130] >   PIDPressure      Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.473766    8428 command_runner.go:130] >   Ready            Unknown   Thu, 14 Mar 2024 19:33:15 +0000   Thu, 14 Mar 2024 19:41:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.473766    8428 command_runner.go:130] > Addresses:
	I0314 19:42:21.473810    8428 command_runner.go:130] >   InternalIP:  172.17.80.135
	I0314 19:42:21.473810    8428 command_runner.go:130] >   Hostname:    multinode-442000-m02
	I0314 19:42:21.473846    8428 command_runner.go:130] > Capacity:
	I0314 19:42:21.473846    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:21.473846    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:21.473902    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:21.473902    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:21.473902    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:21.473902    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:21.473902    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:21.473902    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:21.473953    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:21.473953    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:21.473953    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:21.473953    8428 command_runner.go:130] > System Info:
	I0314 19:42:21.473953    8428 command_runner.go:130] >   Machine ID:                 35b6f7da4d3943d99d8a5913cae1c8fb
	I0314 19:42:21.474005    8428 command_runner.go:130] >   System UUID:                0b9b8376-0767-f940-9973-d373e3dc050d
	I0314 19:42:21.474005    8428 command_runner.go:130] >   Boot ID:                    45d479cc-26e8-46a6-9431-50637071f586
	I0314 19:42:21.474005    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:21.474005    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:21.474005    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:21.474005    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:21.474005    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:21.474081    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:21.474112    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:21.474112    8428 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0314 19:42:21.474146    8428 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0314 19:42:21.474146    8428 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0314 19:42:21.474146    8428 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:21.474194    8428 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0314 19:42:21.474194    8428 command_runner.go:130] >   default                     busybox-5b5d89c9d6-8drpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0314 19:42:21.474236    8428 command_runner.go:130] >   kube-system                 kindnet-c7m4p               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0314 19:42:21.474236    8428 command_runner.go:130] >   kube-system                 kube-proxy-72dzs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0314 19:42:21.474236    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:21.474236    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:21.474236    8428 command_runner.go:130] >   Resource           Requests   Limits
	I0314 19:42:21.474236    8428 command_runner.go:130] >   --------           --------   ------
	I0314 19:42:21.474236    8428 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0314 19:42:21.474236    8428 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0314 19:42:21.474318    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0314 19:42:21.474318    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0314 19:42:21.474351    8428 command_runner.go:130] > Events:
	I0314 19:42:21.474351    8428 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0314 19:42:21.474351    8428 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0314 19:42:21.474385    8428 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0314 19:42:21.474385    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientMemory
	I0314 19:42:21.474385    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasNoDiskPressure
	I0314 19:42:21.474445    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x5 over 20m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientPID
	I0314 19:42:21.474445    8428 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	I0314 19:42:21.474477    8428 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-442000-m02 status is now: NodeReady
	I0314 19:42:21.474518    8428 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	I0314 19:42:21.474518    8428 command_runner.go:130] >   Normal  NodeNotReady             22s                node-controller  Node multinode-442000-m02 status is now: NodeNotReady
	I0314 19:42:21.474518    8428 command_runner.go:130] > Name:               multinode-442000-m03
	I0314 19:42:21.474518    8428 command_runner.go:130] > Roles:              <none>
	I0314 19:42:21.474577    8428 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0314 19:42:21.474577    8428 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0314 19:42:21.474606    8428 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0314 19:42:21.474606    8428 command_runner.go:130] >                     kubernetes.io/hostname=multinode-442000-m03
	I0314 19:42:21.474652    8428 command_runner.go:130] >                     kubernetes.io/os=linux
	I0314 19:42:21.474668    8428 command_runner.go:130] >                     minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	I0314 19:42:21.474668    8428 command_runner.go:130] >                     minikube.k8s.io/name=multinode-442000
	I0314 19:42:21.474668    8428 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0314 19:42:21.474668    8428 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_14T19_36_47_0700
	I0314 19:42:21.474668    8428 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0314 19:42:21.474732    8428 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0314 19:42:21.474732    8428 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0314 19:42:21.474754    8428 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0314 19:42:21.474754    8428 command_runner.go:130] > CreationTimestamp:  Thu, 14 Mar 2024 19:36:47 +0000
	I0314 19:42:21.474754    8428 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0314 19:42:21.474754    8428 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0314 19:42:21.474815    8428 command_runner.go:130] > Unschedulable:      false
	I0314 19:42:21.474815    8428 command_runner.go:130] > Lease:
	I0314 19:42:21.474815    8428 command_runner.go:130] >   HolderIdentity:  multinode-442000-m03
	I0314 19:42:21.474845    8428 command_runner.go:130] >   AcquireTime:     <unset>
	I0314 19:42:21.474845    8428 command_runner.go:130] >   RenewTime:       Thu, 14 Mar 2024 19:37:37 +0000
	I0314 19:42:21.474845    8428 command_runner.go:130] > Conditions:
	I0314 19:42:21.474877    8428 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0314 19:42:21.474877    8428 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0314 19:42:21.474877    8428 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.474937    8428 command_runner.go:130] >   DiskPressure     Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.474937    8428 command_runner.go:130] >   PIDPressure      Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.474972    8428 command_runner.go:130] >   Ready            Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0314 19:42:21.474972    8428 command_runner.go:130] > Addresses:
	I0314 19:42:21.474972    8428 command_runner.go:130] >   InternalIP:  172.17.84.215
	I0314 19:42:21.475006    8428 command_runner.go:130] >   Hostname:    multinode-442000-m03
	I0314 19:42:21.475038    8428 command_runner.go:130] > Capacity:
	I0314 19:42:21.475038    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:21.475055    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:21.475055    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:21.475055    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:21.475055    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:21.475055    8428 command_runner.go:130] > Allocatable:
	I0314 19:42:21.475055    8428 command_runner.go:130] >   cpu:                2
	I0314 19:42:21.475055    8428 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0314 19:42:21.475055    8428 command_runner.go:130] >   hugepages-2Mi:      0
	I0314 19:42:21.475055    8428 command_runner.go:130] >   memory:             2164268Ki
	I0314 19:42:21.475055    8428 command_runner.go:130] >   pods:               110
	I0314 19:42:21.475055    8428 command_runner.go:130] > System Info:
	I0314 19:42:21.475055    8428 command_runner.go:130] >   Machine ID:                 dc7772516bfe448db22a5c28796f53ab
	I0314 19:42:21.475157    8428 command_runner.go:130] >   System UUID:                71573585-d564-f043-9154-3d5854ce61b8
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Boot ID:                    fed746b2-110b-43ee-9065-09983ba74a37
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Kernel Version:             5.10.207
	I0314 19:42:21.475157    8428 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Operating System:           linux
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Architecture:               amd64
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0314 19:42:21.475157    8428 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0314 19:42:21.475157    8428 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0314 19:42:21.475157    8428 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0314 19:42:21.475157    8428 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0314 19:42:21.475157    8428 command_runner.go:130] >   kube-system                 kindnet-r7zdb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	I0314 19:42:21.475157    8428 command_runner.go:130] >   kube-system                 kube-proxy-w2qls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	I0314 19:42:21.475157    8428 command_runner.go:130] > Allocated resources:
	I0314 19:42:21.475157    8428 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Resource           Requests   Limits
	I0314 19:42:21.475157    8428 command_runner.go:130] >   --------           --------   ------
	I0314 19:42:21.475157    8428 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0314 19:42:21.475157    8428 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0314 19:42:21.475157    8428 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0314 19:42:21.475157    8428 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0314 19:42:21.475157    8428 command_runner.go:130] > Events:
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0314 19:42:21.475157    8428 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  Starting                 5m32s                  kube-proxy       
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     15m (x5 over 15m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-442000-m03 status is now: NodeReady
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m34s (x5 over 5m36s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m34s (x5 over 5m36s)  kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m34s (x5 over 5m36s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  RegisteredNode           5m30s                  node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeReady                5m27s                  kubelet          Node multinode-442000-m03 status is now: NodeReady
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  NodeNotReady             4m                     node-controller  Node multinode-442000-m03 status is now: NodeNotReady
	I0314 19:42:21.475157    8428 command_runner.go:130] >   Normal  RegisteredNode           63s                    node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
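
The two worker node descriptions above (multinode-442000-m02 and -m03) both report every condition as Unknown with "Kubelet stopped posting node status", and both carry node.kubernetes.io/unreachable taints, so after the control-plane restart only the primary node is serving pods. A quick way to surface just the Ready condition and taints per node (a sketch, assuming the minikube-generated kubeconfig context multinode-442000 used elsewhere in this test):

    kubectl --context multinode-442000 get nodes -o custom-columns='NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status,TAINTS:.spec.taints[*].key'
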
	I0314 19:42:21.484880    8428 logs.go:123] Gathering logs for etcd [a81a9c43c355] ...
	I0314 19:42:21.484880    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81a9c43c355"
	I0314 19:42:21.519214    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:01.944953Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0314 19:42:21.519385    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.945607Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.93.236:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.93.236:2380","--initial-cluster=multinode-442000=https://172.17.93.236:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.93.236:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.93.236:2380","--name=multinode-442000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0314 19:42:21.519442    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.945676Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0314 19:42:21.519442    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:01.945701Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0314 19:42:21.519562    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94571Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.93.236:2380"]}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94582Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.94751Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"]}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.948798Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-442000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:01.989049Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"39.493838ms"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.0258Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.055698Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","commit-index":1967}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.067927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=()"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.067975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became follower at term 2"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.068051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fa26a6ed08186c39 [peers: [], term: 2, commit: 1967, applied: 0, lastindex: 1967, lastterm: 2]"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"warn","ts":"2024-03-14T19:41:02.100633Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.113992Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1090}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.125551Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1704}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.137052Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0314 19:42:21.519634    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.152836Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"fa26a6ed08186c39","timeout":"7s"}
	I0314 19:42:21.520181    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.153448Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"fa26a6ed08186c39"}
	I0314 19:42:21.520244    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.153504Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"fa26a6ed08186c39","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0314 19:42:21.520300    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154089Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0314 19:42:21.520300    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154894Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0314 19:42:21.520370    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154977Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0314 19:42:21.520423    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.154992Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0314 19:42:21.520423    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=(18025278095570267193)"}
	I0314 19:42:21.520482    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158756Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","added-peer-id":"fa26a6ed08186c39","added-peer-peer-urls":["https://172.17.86.124:2380"]}
	I0314 19:42:21.520535    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","cluster-version":"3.5"}
	I0314 19:42:21.520535    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.158969Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0314 19:42:21.520603    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.159838Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0314 19:42:21.520714    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.160148Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"fa26a6ed08186c39","initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0314 19:42:21.520714    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.160272Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0314 19:42:21.520769    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.161335Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.93.236:2380"}
	I0314 19:42:21.520769    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:02.161389Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.93.236:2380"}
	I0314 19:42:21.520769    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 is starting a new election at term 2"}
	I0314 19:42:21.520876    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became pre-candidate at term 2"}
	I0314 19:42:21.520919    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgPreVoteResp from fa26a6ed08186c39 at term 2"}
	I0314 19:42:21.520974    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.281928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became candidate at term 3"}
	I0314 19:42:21.520974    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgVoteResp from fa26a6ed08186c39 at term 3"}
	I0314 19:42:21.520974    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became leader at term 3"}
	I0314 19:42:21.521043    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.282332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fa26a6ed08186c39 elected leader fa26a6ed08186c39 at term 3"}
	I0314 19:42:21.521096    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.292472Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fa26a6ed08186c39","local-member-attributes":"{Name:multinode-442000 ClientURLs:[https://172.17.93.236:2379]}","request-path":"/0/members/fa26a6ed08186c39/attributes","cluster-id":"76b99849a2fc5549","publish-timeout":"7s"}
	I0314 19:42:21.521155    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.292867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0314 19:42:21.521155    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.296522Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0314 19:42:21.521220    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.298446Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0314 19:42:21.521220    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.311867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.93.236:2379"}
	I0314 19:42:21.521292    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.311957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0314 19:42:21.521292    8428 command_runner.go:130] ! {"level":"info","ts":"2024-03-14T19:41:03.31205Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
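
The etcd log above shows a clean single-member restart: the WAL is replayed from commit index 1967 with no snapshot, member fa26a6ed08186c39 starts a new election from term 2 and elects itself leader at term 3, and client traffic is then served on 172.17.93.236:2379. To confirm the member is healthy after such a restart, one option (a sketch, reusing the container id and certificate paths shown in the startup args above; etcdctl ships inside the etcd image) is:

    minikube -p multinode-442000 ssh -- "sudo docker exec a81a9c43c355 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint health"
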
	I0314 19:42:21.528085    8428 logs.go:123] Gathering logs for coredns [b159aedddf94] ...
	I0314 19:42:21.528158    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b159aedddf94"
	I0314 19:42:21.558526    8428 command_runner.go:130] > .:53
	I0314 19:42:21.558526    8428 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0314 19:42:21.558526    8428 command_runner.go:130] > CoreDNS-1.10.1
	I0314 19:42:21.558605    8428 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0314 19:42:21.558605    8428 command_runner.go:130] > [INFO] 127.0.0.1:38965 - 37747 "HINFO IN 9162400456686827331.1281991328183180689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052220616s
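
The single coredns query above is its expected startup self-check: the HINFO lookup for a random name returns NXDOMAIN, which only proves the upstream resolver answered. An end-to-end resolution check from inside the cluster (a sketch, reusing the busybox pod listed in the node description earlier) would be:

    kubectl --context multinode-442000 exec busybox-5b5d89c9d6-7446n -- nslookup kubernetes.default.svc.cluster.local
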
	I0314 19:42:21.558839    8428 logs.go:123] Gathering logs for kube-proxy [2a62baf3f1b4] ...
	I0314 19:42:21.558839    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a62baf3f1b4"
	I0314 19:42:21.585131    8428 command_runner.go:130] ! I0314 19:19:18.247796       1 server_others.go:69] "Using iptables proxy"
	I0314 19:42:21.585752    8428 command_runner.go:130] ! I0314 19:19:18.275162       1 node.go:141] Successfully retrieved node IP: 172.17.86.124
	I0314 19:42:21.585800    8428 command_runner.go:130] ! I0314 19:19:18.379821       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:42:21.585800    8428 command_runner.go:130] ! I0314 19:19:18.379851       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:42:21.585800    8428 command_runner.go:130] ! I0314 19:19:18.395429       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:42:21.585800    8428 command_runner.go:130] ! I0314 19:19:18.395506       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:42:21.585851    8428 command_runner.go:130] ! I0314 19:19:18.395856       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:42:21.585851    8428 command_runner.go:130] ! I0314 19:19:18.395890       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.585851    8428 command_runner.go:130] ! I0314 19:19:18.417861       1 config.go:188] "Starting service config controller"
	I0314 19:42:21.585896    8428 command_runner.go:130] ! I0314 19:19:18.417913       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:42:21.585896    8428 command_runner.go:130] ! I0314 19:19:18.417950       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:42:21.585964    8428 command_runner.go:130] ! I0314 19:19:18.420511       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:42:21.585964    8428 command_runner.go:130] ! I0314 19:19:18.426566       1 config.go:315] "Starting node config controller"
	I0314 19:42:21.585964    8428 command_runner.go:130] ! I0314 19:19:18.426600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:42:21.585964    8428 command_runner.go:130] ! I0314 19:19:18.519508       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:42:21.586006    8428 command_runner.go:130] ! I0314 19:19:18.524347       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:42:21.586006    8428 command_runner.go:130] ! I0314 19:19:18.527360       1 shared_informer.go:318] Caches are synced for node config
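
The kube-proxy log above shows it coming up in iptables mode, single-stack IPv4 (the node lacks IPv6 iptables support), with all three of its informer caches synced almost immediately. To verify that service rules were actually programmed on the node, one check (a sketch, assuming ssh access to the profile; KUBE-SERVICES is the nat-table chain the iptables proxier maintains) is:

    minikube -p multinode-442000 ssh -- "sudo iptables -t nat -L KUBE-SERVICES -n | head"
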
	I0314 19:42:21.588004    8428 logs.go:123] Gathering logs for kube-controller-manager [12baf105f0bb] ...
	I0314 19:42:21.588067    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12baf105f0bb"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.101287       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.872151       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.874301       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.879645       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.880765       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.883873       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:03.883977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.787609       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.796442       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.796953       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.798900       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.848846       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.849015       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.849025       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.855296       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.858491       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.858512       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.864964       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.865080       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.865088       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.870629       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.871089       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.871332       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.889997       1 shared_informer.go:318] Caches are synced for tokens
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.899597       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.900355       1 horizontal.go:200] "Starting HPA controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.901325       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.921217       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.922072       1 disruption.go:433] "Sending events to api server."
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.922293       1 disruption.go:444] "Starting disruption controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.922481       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.927437       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.929290       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.929325       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.936410       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.936565       1 controller.go:169] "Starting ephemeral volume controller"
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.936765       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0314 19:42:21.619007    8428 command_runner.go:130] ! I0314 19:41:07.954720       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:42:21.619547    8428 command_runner.go:130] ! I0314 19:41:07.954939       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:42:21.619547    8428 command_runner.go:130] ! I0314 19:41:07.955142       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:42:21.619602    8428 command_runner.go:130] ! I0314 19:41:07.970387       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0314 19:42:21.619602    8428 command_runner.go:130] ! I0314 19:41:07.970474       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0314 19:42:21.619652    8428 command_runner.go:130] ! I0314 19:41:07.970624       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:21.619652    8428 command_runner.go:130] ! I0314 19:41:07.971307       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0314 19:42:21.619704    8428 command_runner.go:130] ! I0314 19:41:07.975049       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0314 19:42:21.619755    8428 command_runner.go:130] ! I0314 19:41:07.973288       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:21.619755    8428 command_runner.go:130] ! I0314 19:41:07.974848       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0314 19:42:21.619809    8428 command_runner.go:130] ! I0314 19:41:07.974977       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0314 19:42:21.619857    8428 command_runner.go:130] ! I0314 19:41:07.977476       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:42:21.619857    8428 command_runner.go:130] ! I0314 19:41:07.974992       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:21.619902    8428 command_runner.go:130] ! I0314 19:41:07.975020       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0314 19:42:21.619942    8428 command_runner.go:130] ! I0314 19:41:07.977827       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:21.619960    8428 command_runner.go:130] ! I0314 19:41:07.975030       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:21.620014    8428 command_runner.go:130] ! I0314 19:41:07.990774       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0314 19:42:21.620050    8428 command_runner.go:130] ! I0314 19:41:07.995647       1 ttl_controller.go:124] "Starting TTL controller"
	I0314 19:42:21.620071    8428 command_runner.go:130] ! I0314 19:41:07.995667       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0314 19:42:21.620109    8428 command_runner.go:130] ! I0314 19:41:08.019000       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0314 19:42:21.620157    8428 command_runner.go:130] ! I0314 19:41:08.019415       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.019568       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.019700       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:42:21.620225    8428 command_runner.go:130] ! E0314 19:41:08.029770       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.029950       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.030066       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.030148       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.056856       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.058933       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.059323       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.062839       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.063208       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.063512       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.070376       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.070635       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.070748       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.071006       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.071615       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.079849       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.080117       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.081765       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.084328       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.084731       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.085301       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.092529       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.092761       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.092771       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.097268       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.097521       1 expand_controller.go:328] "Starting expand controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.097531       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.097559       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.117374       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.117512       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.117524       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.126388       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:42:21.620225    8428 command_runner.go:130] ! I0314 19:41:08.127645       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:42:21.620753    8428 command_runner.go:130] ! I0314 19:41:08.127702       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:42:21.620793    8428 command_runner.go:130] ! I0314 19:41:08.131336       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:42:21.620793    8428 command_runner.go:130] ! I0314 19:41:08.131505       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:42:21.620841    8428 command_runner.go:130] ! E0314 19:41:08.142589       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.142621       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.150057       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.152574       1 gc_controller.go:101] "Starting GC controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.152724       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.302881       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.303337       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! W0314 19:41:08.303671       1 shared_informer.go:593] resyncPeriod 21h24m41.293167603s is smaller than resyncCheckPeriod 22h48m56.659186017s and the informer has already started. Changing it to 22h48m56.659186017s
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.303970       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.304292       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.304532       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.304816       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.305073       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.305373       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.305634       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.305976       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.306286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.306541       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.306699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.306843       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.307119       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.307379       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.307553       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.307700       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.308022       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.308207       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.308473       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.308664       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.309850       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.310060       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.344084       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.344536       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.344832       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.397742       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:42:21.620882    8428 command_runner.go:130] ! I0314 19:41:08.400742       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:42:21.621408    8428 command_runner.go:130] ! I0314 19:41:08.401126       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:42:21.621408    8428 command_runner.go:130] ! I0314 19:41:08.448054       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:42:21.621408    8428 command_runner.go:130] ! I0314 19:41:08.448538       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:42:21.621490    8428 command_runner.go:130] ! I0314 19:41:08.495738       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0314 19:42:21.621490    8428 command_runner.go:130] ! I0314 19:41:08.496045       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0314 19:42:21.621490    8428 command_runner.go:130] ! I0314 19:41:08.496112       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0314 19:42:21.621490    8428 command_runner.go:130] ! I0314 19:41:08.547967       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:42:21.621572    8428 command_runner.go:130] ! I0314 19:41:08.548352       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:42:21.621572    8428 command_runner.go:130] ! I0314 19:41:08.548556       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:42:21.621572    8428 command_runner.go:130] ! I0314 19:41:08.593742       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0314 19:42:21.621572    8428 command_runner.go:130] ! I0314 19:41:08.593860       1 job_controller.go:226] "Starting job controller"
	I0314 19:42:21.621655    8428 command_runner.go:130] ! I0314 19:41:08.594297       1 shared_informer.go:311] Waiting for caches to sync for job
	I0314 19:42:21.621655    8428 command_runner.go:130] ! I0314 19:41:08.650392       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:42:21.621736    8428 command_runner.go:130] ! I0314 19:41:08.650668       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:42:21.621736    8428 command_runner.go:130] ! I0314 19:41:08.650851       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:42:21.621736    8428 command_runner.go:130] ! I0314 19:41:08.704591       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0314 19:42:21.621736    8428 command_runner.go:130] ! I0314 19:41:08.704627       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0314 19:42:21.621816    8428 command_runner.go:130] ! I0314 19:41:08.704645       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0314 19:42:21.621816    8428 command_runner.go:130] ! I0314 19:41:18.768485       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0314 19:42:21.621816    8428 command_runner.go:130] ! I0314 19:41:18.768824       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0314 19:42:21.621816    8428 command_runner.go:130] ! I0314 19:41:18.769281       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0314 19:42:21.621816    8428 command_runner.go:130] ! I0314 19:41:18.769315       1 shared_informer.go:311] Waiting for caches to sync for node
	I0314 19:42:21.621898    8428 command_runner.go:130] ! I0314 19:41:18.779639       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:21.621898    8428 command_runner.go:130] ! I0314 19:41:18.796167       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 19:42:21.621898    8428 command_runner.go:130] ! I0314 19:41:18.796514       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:21.621898    8428 command_runner.go:130] ! I0314 19:41:18.796299       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000\" does not exist"
	I0314 19:42:21.621980    8428 command_runner.go:130] ! I0314 19:41:18.799471       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:42:21.621980    8428 command_runner.go:130] ! I0314 19:41:18.799722       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 19:42:21.621980    8428 command_runner.go:130] ! I0314 19:41:18.799937       1 shared_informer.go:318] Caches are synced for TTL
	I0314 19:42:21.621980    8428 command_runner.go:130] ! I0314 19:41:18.800165       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:21.622077    8428 command_runner.go:130] ! I0314 19:41:18.802329       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:21.622077    8428 command_runner.go:130] ! I0314 19:41:18.802379       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:21.622077    8428 command_runner.go:130] ! I0314 19:41:18.806338       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 19:42:21.622158    8428 command_runner.go:130] ! I0314 19:41:18.836188       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 19:42:21.622158    8428 command_runner.go:130] ! I0314 19:41:18.842003       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 19:42:21.622158    8428 command_runner.go:130] ! I0314 19:41:18.842516       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 19:42:21.622158    8428 command_runner.go:130] ! I0314 19:41:18.845380       1 shared_informer.go:318] Caches are synced for service account
	I0314 19:42:21.622158    8428 command_runner.go:130] ! I0314 19:41:18.848744       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 19:42:21.622239    8428 command_runner.go:130] ! I0314 19:41:18.849154       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 19:42:21.622239    8428 command_runner.go:130] ! I0314 19:41:18.849988       1 shared_informer.go:318] Caches are synced for namespace
	I0314 19:42:21.622239    8428 command_runner.go:130] ! I0314 19:41:18.850447       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:21.622239    8428 command_runner.go:130] ! I0314 19:41:18.851139       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 19:42:21.622319    8428 command_runner.go:130] ! I0314 19:41:18.852942       1 shared_informer.go:318] Caches are synced for GC
	I0314 19:42:21.622319    8428 command_runner.go:130] ! I0314 19:41:18.860631       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 19:42:21.622319    8428 command_runner.go:130] ! I0314 19:41:18.862001       1 shared_informer.go:318] Caches are synced for cronjob
	I0314 19:42:21.622319    8428 command_runner.go:130] ! I0314 19:41:18.862045       1 shared_informer.go:318] Caches are synced for PVC protection
	I0314 19:42:21.622400    8428 command_runner.go:130] ! I0314 19:41:18.864453       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0314 19:42:21.622400    8428 command_runner.go:130] ! I0314 19:41:18.865205       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 19:42:21.622400    8428 command_runner.go:130] ! I0314 19:41:18.870312       1 shared_informer.go:318] Caches are synced for node
	I0314 19:42:21.622400    8428 command_runner.go:130] ! I0314 19:41:18.871490       1 range_allocator.go:174] "Sending events to api server"
	I0314 19:42:21.622482    8428 command_runner.go:130] ! I0314 19:41:18.871652       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 19:42:21.622482    8428 command_runner.go:130] ! I0314 19:41:18.871843       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 19:42:21.622482    8428 command_runner.go:130] ! I0314 19:41:18.871901       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 19:42:21.622482    8428 command_runner.go:130] ! I0314 19:41:18.871655       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0314 19:42:21.622482    8428 command_runner.go:130] ! I0314 19:41:18.871600       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 19:42:21.622563    8428 command_runner.go:130] ! I0314 19:41:18.877449       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0314 19:42:21.622563    8428 command_runner.go:130] ! I0314 19:41:18.878919       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 19:42:21.622563    8428 command_runner.go:130] ! I0314 19:41:18.880521       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:21.622563    8428 command_runner.go:130] ! I0314 19:41:18.886337       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 19:42:21.622643    8428 command_runner.go:130] ! I0314 19:41:18.895206       1 shared_informer.go:318] Caches are synced for job
	I0314 19:42:21.622643    8428 command_runner.go:130] ! I0314 19:41:18.898522       1 shared_informer.go:318] Caches are synced for expand
	I0314 19:42:21.622643    8428 command_runner.go:130] ! I0314 19:41:18.902360       1 shared_informer.go:318] Caches are synced for deployment
	I0314 19:42:21.622643    8428 command_runner.go:130] ! I0314 19:41:18.905493       1 shared_informer.go:318] Caches are synced for HPA
	I0314 19:42:21.622643    8428 command_runner.go:130] ! I0314 19:41:18.906213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.805878ms"
	I0314 19:42:21.622722    8428 command_runner.go:130] ! I0314 19:41:18.908178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.802µs"
	I0314 19:42:21.622722    8428 command_runner.go:130] ! I0314 19:41:18.908549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.720551ms"
	I0314 19:42:21.622722    8428 command_runner.go:130] ! I0314 19:41:18.911784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.705µs"
	I0314 19:42:21.622803    8428 command_runner.go:130] ! I0314 19:41:18.919410       1 shared_informer.go:318] Caches are synced for crt configmap
	I0314 19:42:21.622803    8428 command_runner.go:130] ! I0314 19:41:18.923587       1 shared_informer.go:318] Caches are synced for disruption
	I0314 19:42:21.622803    8428 command_runner.go:130] ! I0314 19:41:18.974303       1 shared_informer.go:318] Caches are synced for taint
	I0314 19:42:21.622803    8428 command_runner.go:130] ! I0314 19:41:18.974653       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 19:42:21.622891    8428 command_runner.go:130] ! I0314 19:41:18.975178       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 19:42:21.622891    8428 command_runner.go:130] ! I0314 19:41:18.975416       1 taint_manager.go:210] "Sending events to api server"
	I0314 19:42:21.622891    8428 command_runner.go:130] ! I0314 19:41:18.977051       1 event.go:307] "Event occurred" object="multinode-442000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000 event: Registered Node multinode-442000 in Controller"
	I0314 19:42:21.622973    8428 command_runner.go:130] ! I0314 19:41:18.977995       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:42:21.622973    8428 command_runner.go:130] ! I0314 19:41:18.978165       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:21.623054    8428 command_runner.go:130] ! I0314 19:41:18.980168       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:21.623054    8428 command_runner.go:130] ! I0314 19:41:18.982162       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 19:42:21.623054    8428 command_runner.go:130] ! I0314 19:41:19.001384       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000"
	I0314 19:42:21.623054    8428 command_runner.go:130] ! I0314 19:41:19.002299       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:42:21.623136    8428 command_runner.go:130] ! I0314 19:41:19.002838       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:42:21.623136    8428 command_runner.go:130] ! I0314 19:41:19.003844       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0314 19:42:21.623136    8428 command_runner.go:130] ! I0314 19:41:19.010468       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:21.623136    8428 command_runner.go:130] ! I0314 19:41:19.393074       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:21.623219    8428 command_runner.go:130] ! I0314 19:41:19.393161       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 19:42:21.623219    8428 command_runner.go:130] ! I0314 19:41:19.450734       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:21.623219    8428 command_runner.go:130] ! I0314 19:41:41.542550       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:21.623300    8428 command_runner.go:130] ! I0314 19:41:44.029818       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0314 19:42:21.623300    8428 command_runner.go:130] ! I0314 19:41:44.029853       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-d22jc" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-d22jc"
	I0314 19:42:21.623300    8428 command_runner.go:130] ! I0314 19:41:44.029866       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-7446n" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-7446n"
	I0314 19:42:21.623383    8428 command_runner.go:130] ! I0314 19:41:59.058949       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m02 status is now: NodeNotReady"
	I0314 19:42:21.623383    8428 command_runner.go:130] ! I0314 19:41:59.074940       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8drpb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:21.623383    8428 command_runner.go:130] ! I0314 19:41:59.085508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.938337ms"
	I0314 19:42:21.623465    8428 command_runner.go:130] ! I0314 19:41:59.086845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.804µs"
	I0314 19:42:21.623465    8428 command_runner.go:130] ! I0314 19:41:59.099029       1 event.go:307] "Event occurred" object="kube-system/kindnet-c7m4p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:21.623545    8428 command_runner.go:130] ! I0314 19:41:59.122329       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-72dzs" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:21.623545    8428 command_runner.go:130] ! I0314 19:42:12.281109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.332951ms"
	I0314 19:42:21.623545    8428 command_runner.go:130] ! I0314 19:42:12.281325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="115.209µs"
	I0314 19:42:21.623545    8428 command_runner.go:130] ! I0314 19:42:12.305037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.006µs"
	I0314 19:42:21.623626    8428 command_runner.go:130] ! I0314 19:42:12.366507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.074928ms"
	I0314 19:42:21.623626    8428 command_runner.go:130] ! I0314 19:42:12.368560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.408µs"
	I0314 19:42:21.637843    8428 logs.go:123] Gathering logs for kindnet [999e4c168afe] ...
	I0314 19:42:21.637843    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 999e4c168afe"
	I0314 19:42:21.664882    8428 command_runner.go:130] ! I0314 19:41:08.409720       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0314 19:42:21.664931    8428 command_runner.go:130] ! I0314 19:41:08.410195       1 main.go:107] hostIP = 172.17.93.236
	I0314 19:42:21.664931    8428 command_runner.go:130] ! podIP = 172.17.93.236
	I0314 19:42:21.664931    8428 command_runner.go:130] ! I0314 19:41:08.411178       1 main.go:116] setting mtu 1500 for CNI 
	I0314 19:42:21.664931    8428 command_runner.go:130] ! I0314 19:41:08.411230       1 main.go:146] kindnetd IP family: "ipv4"
	I0314 19:42:21.664931    8428 command_runner.go:130] ! I0314 19:41:08.411277       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0314 19:42:21.664931    8428 command_runner.go:130] ! I0314 19:41:38.747509       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0314 19:42:21.665066    8428 command_runner.go:130] ! I0314 19:41:38.770843       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:21.665066    8428 command_runner.go:130] ! I0314 19:41:38.770994       1 main.go:227] handling current node
	I0314 19:42:21.665066    8428 command_runner.go:130] ! I0314 19:41:38.771413       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:21.665129    8428 command_runner.go:130] ! I0314 19:41:38.771428       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:21.665129    8428 command_runner.go:130] ! I0314 19:41:38.771670       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.80.135 Flags: [] Table: 0} 
	I0314 19:42:21.665129    8428 command_runner.go:130] ! I0314 19:41:38.771817       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:21.665203    8428 command_runner.go:130] ! I0314 19:41:38.771827       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:21.665261    8428 command_runner.go:130] ! I0314 19:41:38.771944       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.84.215 Flags: [] Table: 0} 
	I0314 19:42:21.665308    8428 command_runner.go:130] ! I0314 19:41:48.777997       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:21.665308    8428 command_runner.go:130] ! I0314 19:41:48.778091       1 main.go:227] handling current node
	I0314 19:42:21.665308    8428 command_runner.go:130] ! I0314 19:41:48.778105       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:21.665308    8428 command_runner.go:130] ! I0314 19:41:48.778113       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:21.665376    8428 command_runner.go:130] ! I0314 19:41:48.778217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:21.665376    8428 command_runner.go:130] ! I0314 19:41:48.778373       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:21.665420    8428 command_runner.go:130] ! I0314 19:41:58.793215       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:21.665457    8428 command_runner.go:130] ! I0314 19:41:58.793285       1 main.go:227] handling current node
	I0314 19:42:21.665514    8428 command_runner.go:130] ! I0314 19:41:58.793297       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:21.665557    8428 command_runner.go:130] ! I0314 19:41:58.793304       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:21.665557    8428 command_runner.go:130] ! I0314 19:41:58.793793       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:21.665557    8428 command_runner.go:130] ! I0314 19:41:58.793859       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:21.665618    8428 command_runner.go:130] ! I0314 19:42:08.808709       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:21.665618    8428 command_runner.go:130] ! I0314 19:42:08.808803       1 main.go:227] handling current node
	I0314 19:42:21.665663    8428 command_runner.go:130] ! I0314 19:42:08.808818       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:21.665663    8428 command_runner.go:130] ! I0314 19:42:08.808826       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:21.665739    8428 command_runner.go:130] ! I0314 19:42:08.809153       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:21.665739    8428 command_runner.go:130] ! I0314 19:42:08.809168       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:21.665794    8428 command_runner.go:130] ! I0314 19:42:18.821697       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:42:21.665851    8428 command_runner.go:130] ! I0314 19:42:18.821789       1 main.go:227] handling current node
	I0314 19:42:21.665851    8428 command_runner.go:130] ! I0314 19:42:18.821805       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:21.665895    8428 command_runner.go:130] ! I0314 19:42:18.821814       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:21.665895    8428 command_runner.go:130] ! I0314 19:42:18.822290       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:21.665895    8428 command_runner.go:130] ! I0314 19:42:18.822324       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:21.669432    8428 logs.go:123] Gathering logs for Docker ...
	I0314 19:42:21.669432    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 19:42:21.699492    8428 command_runner.go:130] > Mar 14 19:39:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:21.699492    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:21.699492    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:21.699583    8428 command_runner.go:130] > Mar 14 19:39:36 minikube cri-dockerd[222]: time="2024-03-14T19:39:36Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:21.699583    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:21.699583    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:21.699583    8428 command_runner.go:130] > Mar 14 19:39:37 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.699583    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0314 19:42:21.699583    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.699679    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:21.699679    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:21.699679    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:21.699679    8428 command_runner.go:130] > Mar 14 19:39:39 minikube cri-dockerd[402]: time="2024-03-14T19:39:39Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:21.699679    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:21.699774    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:21.699774    8428 command_runner.go:130] > Mar 14 19:39:39 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.699774    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0314 19:42:21.699774    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.699774    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:21.699858    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:21.699858    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:21.699858    8428 command_runner.go:130] > Mar 14 19:39:41 minikube cri-dockerd[422]: time="2024-03-14T19:39:41Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0314 19:42:21.699858    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:21.699858    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:21.699940    8428 command_runner.go:130] > Mar 14 19:39:41 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.699940    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0314 19:42:21.699940    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.699940    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0314 19:42:21.699940    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0314 19:42:21.699940    8428 command_runner.go:130] > Mar 14 19:39:44 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.700022    8428 command_runner.go:130] > Mar 14 19:40:26 multinode-442000 systemd[1]: Starting Docker Application Container Engine...
	I0314 19:42:21.700022    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.010258466Z" level=info msg="Starting up"
	I0314 19:42:21.700022    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.011413188Z" level=info msg="containerd not running, starting managed containerd"
	I0314 19:42:21.700104    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[650]: time="2024-03-14T19:40:27.012927209Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=656
	I0314 19:42:21.700104    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.042687292Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0314 19:42:21.700104    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069138554Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0314 19:42:21.700104    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069242083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0314 19:42:21.700184    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069344111Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0314 19:42:21.700184    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.069362416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700184    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070081016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.700184    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070164740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700270    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070380400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.700270    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070511536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700270    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070532642Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0314 19:42:21.700351    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070544145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700383    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.070983067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.071556427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074554061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074645687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074800830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.074883153Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075687977Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075800308Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.075818813Z" level=info msg="metadata content store policy set" policy=shared
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081334348Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081440978Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081463484Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081526902Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081545007Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.081621128Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082036144Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082193387Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082276711Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082349431Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082368036Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082385141Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082401545Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082417450Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082433154Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082457161Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082515377Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.700411    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082533482Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.701018    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082554788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082572093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082586997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701051    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082601801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701127    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082616305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701127    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082631109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701127    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082643913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701127    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082659317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701127    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082673721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701210    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082690226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701210    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082704230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701210    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082717333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701210    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082730637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701210    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082747942Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0314 19:42:21.701286    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082771048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701318    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082785952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701318    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082799956Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082936994Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082973004Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082986808Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.082998612Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083067631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083095839Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083107842Z" level=info msg="NRI interface is disabled by configuration."
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083364013Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083531860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083575672Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:27 multinode-442000 dockerd[656]: time="2024-03-14T19:40:27.083609482Z" level=info msg="containerd successfully booted in 0.043398s"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.063674621Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.220876850Z" level=info msg="Loading containers: start."
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.643208421Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.726589336Z" level=info msg="Loading containers: done."
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.750141296Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.750832983Z" level=info msg="Daemon has completed initialization"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 systemd[1]: Started Docker Application Container Engine.
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.799522730Z" level=info msg="API listen on [::]:2376"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:28 multinode-442000 dockerd[650]: time="2024-03-14T19:40:28.799691776Z" level=info msg="API listen on /var/run/docker.sock"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 systemd[1]: Stopping Docker Application Container Engine...
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.824796168Z" level=info msg="Processing signal 'terminated'"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.825961557Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826585605Z" level=info msg="Daemon shutdown complete"
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826653911Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:52 multinode-442000 dockerd[650]: time="2024-03-14T19:40:52.826812323Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: docker.service: Deactivated successfully.
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: Stopped Docker Application Container Engine.
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 systemd[1]: Starting Docker Application Container Engine...
	I0314 19:42:21.701346    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.899936864Z" level=info msg="Starting up"
	I0314 19:42:21.701872    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.900739426Z" level=info msg="containerd not running, starting managed containerd"
	I0314 19:42:21.701872    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:53.901763504Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1049
	I0314 19:42:21.701872    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.930795337Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0314 19:42:21.701872    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.957961927Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0314 19:42:21.701872    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958063735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0314 19:42:21.701971    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958107338Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0314 19:42:21.701971    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958123339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.701971    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958150841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.701971    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958163842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.702049    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958360458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.702049    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958444864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.702125    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958463766Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0314 19:42:21.702125    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958475466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.702125    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958502569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.702125    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.958670881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.702201    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961627209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.702201    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961715316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0314 19:42:21.702201    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961871928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0314 19:42:21.702284    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961949634Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0314 19:42:21.702317    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.961985336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0314 19:42:21.702317    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962005238Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0314 19:42:21.702365    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962017139Z" level=info msg="metadata content store policy set" policy=shared
	I0314 19:42:21.702398    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962188852Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0314 19:42:21.702420    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962280259Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0314 19:42:21.702420    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962311462Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0314 19:42:21.702457    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962328263Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0314 19:42:21.702457    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962344564Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0314 19:42:21.702493    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962393368Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962810900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.962939310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963018216Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963036317Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963060419Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963076820Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963091221Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963106323Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963121324Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963135425Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963148726Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963162027Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963184029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963205330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963220631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963270235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963286336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963300438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963313039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963326640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963341141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963357642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963369743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963382444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963395545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963411646Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963433148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963449149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963461550Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0314 19:42:21.702522    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963512954Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0314 19:42:21.703048    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963529855Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0314 19:42:21.703048    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963593860Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0314 19:42:21.703048    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963606261Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0314 19:42:21.703126    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963665466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0314 19:42:21.703126    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963679767Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0314 19:42:21.703126    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.963695368Z" level=info msg="NRI interface is disabled by configuration."
	I0314 19:42:21.703126    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.964176205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0314 19:42:21.703204    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.964503330Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0314 19:42:21.703204    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.965392899Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0314 19:42:21.703204    8428 command_runner.go:130] > Mar 14 19:40:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:40:53.966787506Z" level=info msg="containerd successfully booted in 0.037267s"
	I0314 19:42:21.703280    8428 command_runner.go:130] > Mar 14 19:40:54 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:54.945087153Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0314 19:42:21.703280    8428 command_runner.go:130] > Mar 14 19:40:54 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:54.972020025Z" level=info msg="Loading containers: start."
	I0314 19:42:21.703280    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.259462934Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0314 19:42:21.703353    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.336883289Z" level=info msg="Loading containers: done."
	I0314 19:42:21.703353    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.370669888Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0314 19:42:21.703353    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.370874904Z" level=info msg="Daemon has completed initialization"
	I0314 19:42:21.703353    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.415311921Z" level=info msg="API listen on /var/run/docker.sock"
	I0314 19:42:21.703428    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 dockerd[1043]: time="2024-03-14T19:40:55.415467233Z" level=info msg="API listen on [::]:2376"
	I0314 19:42:21.703428    8428 command_runner.go:130] > Mar 14 19:40:55 multinode-442000 systemd[1]: Started Docker Application Container Engine.
	I0314 19:42:21.703428    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0314 19:42:21.703428    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0314 19:42:21.703501    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Start docker client with request timeout 0s"
	I0314 19:42:21.703501    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0314 19:42:21.703501    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Loaded network plugin cni"
	I0314 19:42:21.703501    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0314 19:42:21.703689    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker Info: &{ID:04f4855f-417a-422c-b5bb-3cf8a43fb438 Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2024-03-14T19:40:56.401787998Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0004c0150 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-442000 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0314 19:42:21.703725    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0314 19:42:21.703725    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0314 19:42:21.703725    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0314 19:42:21.703802    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:40:56Z" level=info msg="Start cri-dockerd grpc backend"
	I0314 19:42:21.703802    8428 command_runner.go:130] > Mar 14 19:40:56 multinode-442000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0314 19:42:21.703839    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-7446n_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773\""
	I0314 19:42:21.703839    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-d22jc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0\""
	I0314 19:42:21.703905    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294795352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.703905    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294882858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.703905    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.294903860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.703983    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.295303891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.380666857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.380946878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.381075288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.381588628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418754186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418872295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.418919499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.419130315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35dd339c8a08d84d0d1a4d2c062b04d44baff78d20c6ed33ce967d50c18eaa3c/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.449937485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450067495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450100297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.450295012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67475bf80ddd91df7549842450a8d92c27cd16f814cd4e4c750a7cad7d82fc9f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a27fa2188ee4cf0c44cde0f8cae03a83655bc574c856082192e3261801efcc72/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c70744e60ac50b50085376d0c124ff15cc884b8a836b0085ef71a65ddb06bcfd/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782527266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782834890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.782945299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.783324628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950307171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950638097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.950847113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704045    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:01.951959699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704572    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.033329657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704572    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.033826996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704572    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.034090516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704572    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.034801671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704652    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038389546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704652    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038570160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704652    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038686569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704727    8428 command_runner.go:130] > Mar 14 19:41:02 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:02.038972291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704727    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:05Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0314 19:42:21.704727    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056067890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704803    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056148096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704803    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056166397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704803    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.056406816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704876    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.109761119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704876    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110023440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.704876    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110099145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704950    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.110475674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.704950    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.116978275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.704950    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117046280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705024    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117060481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705024    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.117158888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705024    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a723f141543f2007cc07e048ef5836fca4ae70749b7266630f6c890bb233c09a/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.705099    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f513a7aff67200987eb0f28647720ea4cb9bbdb684fc85d1b08c0dd54563517d/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.705099    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432676357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705099    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432829669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705181    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.432849370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705181    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.433004382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705181    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.579105320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705257    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580432922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705257    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580451623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705257    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.580554931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705257    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:41:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9176b55446637c4407c9a64ce7d85fce2b395bcc0a22061f5f7ff304ff2d47f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.705336    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.897653021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705336    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.897936143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705336    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.898062553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705411    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:07.898459584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705411    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1043]: time="2024-03-14T19:41:37.705977514Z" level=info msg="ignoring event" container=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0314 19:42:21.705411    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706482647Z" level=info msg="shim disconnected" id=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 namespace=moby
	I0314 19:42:21.705487    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706677460Z" level=warning msg="cleaning up after shim disconnected" id=2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8 namespace=moby
	I0314 19:42:21.705487    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:37.706692261Z" level=info msg="cleaning up dead shim" namespace=moby
	I0314 19:42:21.705487    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663136392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705563    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663371709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705563    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663411212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705563    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 dockerd[1049]: time="2024-03-14T19:41:53.663537821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705563    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837487028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705639    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837604337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705674    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837625738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705704    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.837719345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705745    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.848167835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849098605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849287919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:10 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:10.849656747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 cri-dockerd[1267]: time="2024-03-14T19:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f/resolv.conf as [nameserver 172.17.80.1]"
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.575693713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.575950032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.576019637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577004211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577168224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577288033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.577583255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 dockerd[1049]: time="2024-03-14T19:42:11.576656985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:13 multinode-442000 dockerd[1043]: 2024/03/14 19:42:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.705766    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706291    8428 command_runner.go:130] > Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706291    8428 command_runner.go:130] > Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706291    8428 command_runner.go:130] > Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706368    8428 command_runner.go:130] > Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706401    8428 command_runner.go:130] > Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706401    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706401    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706401    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706401    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706401    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706401    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706401    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706401    8428 command_runner.go:130] > Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706401    8428 command_runner.go:130] > Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706401    8428 command_runner.go:130] > Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706926    8428 command_runner.go:130] > Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706926    8428 command_runner.go:130] > Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.706999    8428 command_runner.go:130] > Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0314 19:42:21.707031    8428 command_runner.go:130] > Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
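The repeated dockerd messages above come from Go's net/http server, which warns whenever a handler calls WriteHeader more than once (here the second call comes from the otelhttp wrapper at wrap.go:98). A minimal sketch, assuming a plain handler rather than dockerd's instrumented one, that reproduces the same warning:

	package main

	import (
		"log"
		"net/http"
	)

	// Calling WriteHeader twice makes the server log
	// "http: superfluous response.WriteHeader call from ..." exactly as in
	// the dockerd journal lines above; the message is harmless noise.
	func handler(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // first call commits the status line
		w.WriteHeader(http.StatusOK) // superfluous second call triggers the warning
	}

	func main() {
		http.HandleFunc("/", handler)
		log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
	}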
	I0314 19:42:21.738751    8428 logs.go:123] Gathering logs for container status ...
	I0314 19:42:21.738751    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:42:21.830839    8428 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0314 19:42:21.830839    8428 command_runner.go:130] > b159aedddf94a       ead0a4a53df89                                                                                         11 seconds ago       Running             coredns                   1                   89f326046d00d       coredns-5dd5756b68-d22jc
	I0314 19:42:21.830984    8428 command_runner.go:130] > 813492ad2d666       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   cddebe360bf3a       busybox-5b5d89c9d6-7446n
	I0314 19:42:21.830984    8428 command_runner.go:130] > 3167caea2534f       6e38f40d628db                                                                                         29 seconds ago       Running             storage-provisioner       2                   a723f141543f2       storage-provisioner
	I0314 19:42:21.830984    8428 command_runner.go:130] > 999e4c168afef       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   a9176b5544663       kindnet-7b9lf
	I0314 19:42:21.830984    8428 command_runner.go:130] > 497007582e446       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   f513a7aff6720       kube-proxy-cg28g
	I0314 19:42:21.830984    8428 command_runner.go:130] > 2876622a2618d       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   a723f141543f2       storage-provisioner
	I0314 19:42:21.830984    8428 command_runner.go:130] > 32d90a3ea2131       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   c70744e60ac50       kube-scheduler-multinode-442000
	I0314 19:42:21.831116    8428 command_runner.go:130] > a598d24960de8       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   a27fa2188ee4c       kube-apiserver-multinode-442000
	I0314 19:42:21.831116    8428 command_runner.go:130] > 12baf105f0bb2       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   67475bf80ddd9       kube-controller-manager-multinode-442000
	I0314 19:42:21.831173    8428 command_runner.go:130] > a81a9c43c3552       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   35dd339c8a08d       etcd-multinode-442000
	I0314 19:42:21.831207    8428 command_runner.go:130] > 0cd43cdaa31c9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   fa0f2372c88ee       busybox-5b5d89c9d6-7446n
	I0314 19:42:21.831207    8428 command_runner.go:130] > 8899bc0038935       ead0a4a53df89                                                                                         22 minutes ago       Exited              coredns                   0                   a3dba3fc54c01       coredns-5dd5756b68-d22jc
	I0314 19:42:21.831207    8428 command_runner.go:130] > 1a321c0e89971       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              22 minutes ago       Exited              kindnet-cni               0                   b046b896affe9       kindnet-7b9lf
	I0314 19:42:21.831207    8428 command_runner.go:130] > 2a62baf3f1b46       83f6cc407eed8                                                                                         23 minutes ago       Exited              kube-proxy                0                   9b3244b47278e       kube-proxy-cg28g
	I0314 19:42:21.831207    8428 command_runner.go:130] > dbb603289bf16       e3db313c6dbc0                                                                                         23 minutes ago       Exited              kube-scheduler            0                   54e39762d7a64       kube-scheduler-multinode-442000
	I0314 19:42:21.831207    8428 command_runner.go:130] > 16b80f73683dc       d058aa5ab969c                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   102c907609a3a       kube-controller-manager-multinode-442000
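The container table above is produced by the fallback command on the ssh_runner line before it: try crictl first, fall back to docker ps -a. A minimal Go sketch, assuming a local shell rather than minikube's SSH runner, of the same fallback:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Prefer crictl when it is on PATH; otherwise fall back to the
		// Docker CLI, mirroring the ssh_runner command above.
		script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(string(out))
	}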
	I0314 19:42:21.834044    8428 logs.go:123] Gathering logs for kubelet ...
	I0314 19:42:21.834123    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516074    1388 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516440    1388 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: I0314 19:40:57.516773    1388 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 kubelet[1388]: E0314 19:40:57.516893    1388 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:57 multinode-442000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293295    1450 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293422    1450 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: I0314 19:40:58.293759    1450 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 kubelet[1450]: E0314 19:40:58.293809    1450 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:40:58 multinode-442000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
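Both failed starts above exit at the same check: client rotation is on, but neither a current client certificate nor the bootstrap kubeconfig exists yet, so kubelet exits and systemd restarts the unit until the file is written (the third start, below, finds the client cert and proceeds). A minimal sketch, assuming a bare stat check rather than kubelet's full bootstrap logic:

	package main

	import (
		"log"
		"os"
	)

	func main() {
		// Simplified stand-in for the check behind the "command failed"
		// lines above; the real kubelet first tries an existing client
		// certificate before requiring this file.
		const bootstrap = "/etc/kubernetes/bootstrap-kubelet.conf"
		if _, err := os.Stat(bootstrap); err != nil {
			log.Fatalf("failed to run Kubelet: unable to load bootstrap kubeconfig: %v", err)
		}
		log.Println("bootstrap kubeconfig present; continuing startup")
	}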
	I0314 19:42:21.856327    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0314 19:42:21.856870    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270178    1523 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0314 19:42:21.856939    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270275    1523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.856999    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.270469    1523 server.go:895] "Client rotation is on, will bootstrap in background"
	I0314 19:42:21.856999    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.272943    1523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0314 19:42:21.857069    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.286808    1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:21.857069    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.333673    1523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0314 19:42:21.857136    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335204    1523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0314 19:42:21.857242    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335543    1523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0314 19:42:21.857289    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335688    1523 topology_manager.go:138] "Creating topology manager with none policy"
	I0314 19:42:21.857289    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.335703    1523 container_manager_linux.go:301] "Creating device plugin manager"
	I0314 19:42:21.857289    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.336879    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0314 19:42:21.857350    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.338507    1523 kubelet.go:393] "Attempting to sync node with API server"
	I0314 19:42:21.857416    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.338606    1523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0314 19:42:21.857416    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.339942    1523 kubelet.go:309] "Adding apiserver pod source"
	I0314 19:42:21.857467    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.339973    1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0314 19:42:21.857542    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.342644    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.857542    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.342728    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.857621    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.352846    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.857682    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.353005    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.857749    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.362091    1523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0314 19:42:21.857749    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.368654    1523 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0314 19:42:21.857749    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.370831    1523 server.go:1232] "Started kubelet"
	I0314 19:42:21.857821    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.376404    1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0314 19:42:21.857821    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.381472    1523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0314 19:42:21.857891    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.381715    1523 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0314 19:42:21.857891    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.383735    1523 server.go:462] "Adding debug handlers to kubelet server"
	I0314 19:42:21.857956    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.385265    1523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0314 19:42:21.857956    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.387577    1523 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0314 19:42:21.857956    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.392182    1523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0314 19:42:21.857956    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.392853    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="200ms"
	I0314 19:42:21.857956    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.392921    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.857956    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.392970    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.858309    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.402867    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-442000.17bcb8e6e82683f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-442000", UID:"multinode-442000", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-442000"}, FirstTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), LastTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-442000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.93.236:8443: connect: connection refused'(may retry after sleeping)
	I0314 19:42:21.858309    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.431568    1523 reconciler_new.go:29] "Reconciler: start to sync state"
	I0314 19:42:21.858309    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453043    1523 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0314 19:42:21.858383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453062    1523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0314 19:42:21.858383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453088    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0314 19:42:21.858383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453812    1523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0314 19:42:21.858383    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453838    1523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0314 19:42:21.858476    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.453846    1523 policy_none.go:49] "None policy: Start"
	I0314 19:42:21.858476    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.459854    1523 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0314 19:42:21.858511    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.459925    1523 state_mem.go:35] "Initializing new in-memory state store"
	I0314 19:42:21.858535    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.460715    1523 state_mem.go:75] "Updated machine memory state"
	I0314 19:42:21.858586    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.466366    1523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0314 19:42:21.858586    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.471455    1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0314 19:42:21.858651    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.475344    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0314 19:42:21.858651    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478780    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0314 19:42:21.858651    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478820    1523 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0314 19:42:21.858651    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.478846    1523 kubelet.go:2303] "Starting kubelet main sync loop"
	I0314 19:42:21.858731    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.478899    1523 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: W0314 19:41:00.485952    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.487569    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.493845    1523 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-442000\" not found"
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.501023    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.501915    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.503739    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0314 19:42:21.858806    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0314 19:42:21.859016    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0314 19:42:21.859088    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0314 19:42:21.859088    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
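The canary failure above is ip6tables reporting that the kernel's IPv6 nat table is unavailable; kubelet tolerates this and only logs it. A diagnostic sketch, assuming the cause is simply an unloaded ip6table_nat module (this is not something minikube runs itself):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Loading ip6table_nat is the usual answer to "Table does not
		// exist (do you need to insmod?)" from ip6tables.
		if out, err := exec.Command("sudo", "modprobe", "ip6table_nat").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
		// Listing the nat table confirms it now exists.
		out, _ := exec.Command("sudo", "ip6tables", "-t", "nat", "-L").CombinedOutput()
		fmt.Print(string(out))
	}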
	I0314 19:42:21.859168    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.578961    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5b88117f99a24e81a324ab026c69a7058a7c1bc88d9b9a5386134abc257bba"
	I0314 19:42:21.859168    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.578983    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e39762d7a6437164a9b2c6dd22b1f36b57514310190ce4acc3349001cb1774"
	I0314 19:42:21.859168    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.579017    1523 topology_manager.go:215] "Topology Admit Handler" podUID="2b2434280023596d1e3c90125a7219ed" podNamespace="kube-system" podName="kube-scheduler-multinode-442000"
	I0314 19:42:21.859168    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.592991    1523 topology_manager.go:215] "Topology Admit Handler" podUID="7754d2f32966faec8123dc3b8a2af767" podNamespace="kube-system" podName="kube-apiserver-multinode-442000"
	I0314 19:42:21.859364    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.594193    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="400ms"
	I0314 19:42:21.859416    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.609977    1523 topology_manager.go:215] "Topology Admit Handler" podUID="a7ee530f2bd843eddeace8cd6ec0d204" podNamespace="kube-system" podName="kube-controller-manager-multinode-442000"
	I0314 19:42:21.859416    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.622973    1523 topology_manager.go:215] "Topology Admit Handler" podUID="fa99a5621d016aa714804afcaa1e0a53" podNamespace="kube-system" podName="etcd-multinode-442000"
	I0314 19:42:21.859486    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.634832    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b2434280023596d1e3c90125a7219ed-kubeconfig\") pod \"kube-scheduler-multinode-442000\" (UID: \"2b2434280023596d1e3c90125a7219ed\") " pod="kube-system/kube-scheduler-multinode-442000"
	I0314 19:42:21.859486    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640587    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b179d157b6b2f71cc980c7ea5060a613be77e84e89947fbcb91a687ea7310eaf"
	I0314 19:42:21.859561    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640610    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046b896affe9f3219822b857a6b4dfa1427854d5df420b6b2e1cec631372548"
	I0314 19:42:21.859561    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640625    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa0f2372c88eef3de0c7caa0041064157c314aff4c14bf6622f34dd89106f773"
	I0314 19:42:21.859627    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640637    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b3244b47278e22e56ab0362b7a74ee80ca2806fb1074d718b0278b5bc70be76"
	I0314 19:42:21.859627    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640648    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3dba3fc54c01e7fb1675536e155d6b541ed5782f664675ccd953639013f50b0"
	I0314 19:42:21.859627    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640663    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="102c907609a3ac28e95d46e2671477684c5a043672e21597c677ee9dbfcb7e08"
	I0314 19:42:21.859755    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.640674    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab390fc53b998ec55449f16c05933add797f430f2cc6f4b55afabf79cd8b0bc7"
	I0314 19:42:21.859755    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.713400    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:21.859755    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.714712    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:21.859755    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736377    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-ca-certs\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:21.859755    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736439    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-k8s-certs\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:21.859923    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736466    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7754d2f32966faec8123dc3b8a2af767-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-442000\" (UID: \"7754d2f32966faec8123dc3b8a2af767\") " pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:21.859989    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736490    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-flexvolume-dir\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:21.860054    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736521    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-k8s-certs\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:21.860177    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736546    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/fa99a5621d016aa714804afcaa1e0a53-etcd-certs\") pod \"etcd-multinode-442000\" (UID: \"fa99a5621d016aa714804afcaa1e0a53\") " pod="kube-system/etcd-multinode-442000"
	I0314 19:42:21.860443    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736609    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-ca-certs\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:21.860516    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736642    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-kubeconfig\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:21.860636    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736675    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7ee530f2bd843eddeace8cd6ec0d204-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-442000\" (UID: \"a7ee530f2bd843eddeace8cd6ec0d204\") " pod="kube-system/kube-controller-manager-multinode-442000"
	I0314 19:42:21.860689    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: I0314 19:41:00.736706    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/fa99a5621d016aa714804afcaa1e0a53-etcd-data\") pod \"etcd-multinode-442000\" (UID: \"fa99a5621d016aa714804afcaa1e0a53\") " pod="kube-system/etcd-multinode-442000"
	I0314 19:42:21.860689    8428 command_runner.go:130] > Mar 14 19:41:00 multinode-442000 kubelet[1523]: E0314 19:41:00.996146    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="800ms"
	I0314 19:42:21.860875    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.009288    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-442000.17bcb8e6e82683f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-442000", UID:"multinode-442000", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-442000"}, FirstTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), LastTimestamp:time.Date(2024, time.March, 14, 19, 41, 0, 370772979, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-442000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.93.236:8443: connect: connection refused'(may retry after sleeping)
	I0314 19:42:21.860916    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.128790    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:21.860984    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.130034    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
	I0314 19:42:21.860984    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.475229    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861049    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.475367    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861115    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.647700    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861188    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.647839    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-442000&limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861188    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.684558    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c70744e60ac50b50085376d0c124ff15cc884b8a836b0085ef71a65ddb06bcfd"
	I0314 19:42:21.861188    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.767121    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.767283    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.797772    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-442000?timeout=10s\": dial tcp 172.17.93.236:8443: connect: connection refused" interval="1.6s"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: W0314 19:41:01.907277    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.907408    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.93.236:8443: connect: connection refused
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: I0314 19:41:01.963548    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:01 multinode-442000 kubelet[1523]: E0314 19:41:01.967786    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.93.236:8443: connect: connection refused" node="multinode-442000"
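Every "connection refused" line above, reflector list/watch and node registration alike, is the same condition: the kubelet is up before the restarted kube-apiserver container is listening on 172.17.93.236:8443, and each client retries until it is (registration finally succeeds at 19:41:05, just below). A minimal sketch, assuming a plain TCP probe instead of the clients' real backoff, of waiting out that window:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Poll the apiserver endpoint until it accepts connections,
		// which is effectively what the retry loops above are doing.
		addr := "172.17.93.236:8443"
		for {
			conn, err := net.DialTimeout("tcp", addr, time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("apiserver reachable:", addr)
				return
			}
			time.Sleep(time.Second)
		}
	}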
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:03 multinode-442000 kubelet[1523]: I0314 19:41:03.581966    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-442000"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.875219    1523 kubelet_node_status.go:108] "Node was previously registered" node="multinode-442000"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.875953    1523 kubelet_node_status.go:73] "Successfully registered node" node="multinode-442000"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.881726    1523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.882677    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0314 19:42:21.861407    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: I0314 19:41:05.894905    1523 setters.go:552] "Node became not ready" node="multinode-442000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-14T19:41:05Z","lastTransitionTime":"2024-03-14T19:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
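The NotReady condition above clears once a CNI config appears: the runtime reports NetworkReady=false until kindnet (shown restarting in the container table earlier) writes its config under /etc/cni/net.d. A minimal sketch, assuming the standard CNI config directory, of the same check:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// An empty /etc/cni/net.d is what "cni config uninitialized"
		// means in the kubelet line above.
		entries, err := os.ReadDir("/etc/cni/net.d")
		if err != nil || len(entries) == 0 {
			fmt.Println("NetworkReady=false: cni config uninitialized")
			return
		}
		fmt.Println("cni config present:", entries[0].Name())
	}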
	I0314 19:42:21.861951    8428 command_runner.go:130] > Mar 14 19:41:05 multinode-442000 kubelet[1523]: E0314 19:41:05.973748    1523 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-multinode-442000\" already exists" pod="kube-system/etcd-multinode-442000"
	I0314 19:42:21.862025    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.346543    1523 apiserver.go:52] "Watching apiserver"
	I0314 19:42:21.862067    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355573    1523 topology_manager.go:215] "Topology Admit Handler" podUID="677b9084-0026-4b21-b041-445940624ed7" podNamespace="kube-system" podName="kindnet-7b9lf"
	I0314 19:42:21.862067    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355823    1523 topology_manager.go:215] "Topology Admit Handler" podUID="c7f798bf-6722-4731-af8d-ccd5703d116e" podNamespace="kube-system" podName="kube-proxy-cg28g"
	I0314 19:42:21.862067    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.355970    1523 topology_manager.go:215] "Topology Admit Handler" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac" podNamespace="kube-system" podName="coredns-5dd5756b68-d22jc"
	I0314 19:42:21.862067    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.356220    1523 topology_manager.go:215] "Topology Admit Handler" podUID="65d76566-4401-4b28-8452-10ed98624901" podNamespace="kube-system" podName="storage-provisioner"
	I0314 19:42:21.862229    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.356515    1523 topology_manager.go:215] "Topology Admit Handler" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2" podNamespace="default" podName="busybox-5b5d89c9d6-7446n"
	I0314 19:42:21.862229    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.356776    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.862315    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.356948    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.862355    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.360847    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-442000" podUID="02a2d011-5f4c-451c-9698-a88e42e4b6c9"
	I0314 19:42:21.862434    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.388530    1523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0314 19:42:21.862485    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.394882    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-442000"
	I0314 19:42:21.862485    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419699    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7f798bf-6722-4731-af8d-ccd5703d116e-xtables-lock\") pod \"kube-proxy-cg28g\" (UID: \"c7f798bf-6722-4731-af8d-ccd5703d116e\") " pod="kube-system/kube-proxy-cg28g"
	I0314 19:42:21.862564    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419828    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-cni-cfg\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:21.862654    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419854    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-lib-modules\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:21.862654    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419895    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/65d76566-4401-4b28-8452-10ed98624901-tmp\") pod \"storage-provisioner\" (UID: \"65d76566-4401-4b28-8452-10ed98624901\") " pod="kube-system/storage-provisioner"
	I0314 19:42:21.862654    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.419943    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/677b9084-0026-4b21-b041-445940624ed7-xtables-lock\") pod \"kindnet-7b9lf\" (UID: \"677b9084-0026-4b21-b041-445940624ed7\") " pod="kube-system/kindnet-7b9lf"
	I0314 19:42:21.862774    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.420062    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7f798bf-6722-4731-af8d-ccd5703d116e-lib-modules\") pod \"kube-proxy-cg28g\" (UID: \"c7f798bf-6722-4731-af8d-ccd5703d116e\") " pod="kube-system/kube-proxy-cg28g"
	I0314 19:42:21.862774    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.420370    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.862896    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.420509    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:06.920467401 +0000 UTC m=+6.742091622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.862945    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447169    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863020    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447481    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863020    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.447769    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:06.9477485 +0000 UTC m=+6.769372721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863097    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.496544    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81fdcd9740169a0b72b7c7316eeac39f" path="/var/lib/kubelet/pods/81fdcd9740169a0b72b7c7316eeac39f/volumes"
	I0314 19:42:21.863097    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.497856    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="92e70beb375f9f247f5f8395dc065033" path="/var/lib/kubelet/pods/92e70beb375f9f247f5f8395dc065033/volumes"
	I0314 19:42:21.863186    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.840791    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-442000" podUID="8974ad44-5d36-48f0-bc6b-9115bab5fb5e"
	I0314 19:42:21.863186    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.864488    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-442000" podStartSLOduration=0.864428449 podCreationTimestamp="2024-03-14 19:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:41:06.656175631 +0000 UTC m=+6.477799952" watchObservedRunningTime="2024-03-14 19:41:06.864428449 +0000 UTC m=+6.686052670"
	I0314 19:42:21.863278    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: I0314 19:41:06.889820    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-442000"
	I0314 19:42:21.863278    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.925613    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.863368    8428 command_runner.go:130] > Mar 14 19:41:06 multinode-442000 kubelet[1523]: E0314 19:41:06.925789    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:07.925744766 +0000 UTC m=+7.747368987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.863457    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026456    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863457    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026485    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863547    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.026583    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:08.02656612 +0000 UTC m=+7.848190341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863547    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.479340    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.863635    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.479540    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.863635    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.934416    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.863725    8428 command_runner.go:130] > Mar 14 19:41:07 multinode-442000 kubelet[1523]: E0314 19:41:07.934566    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:09.934544359 +0000 UTC m=+9.756168580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.863814    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035285    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863814    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035328    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.863904    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: E0314 19:41:08.035382    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:10.035364414 +0000 UTC m=+9.856988635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: I0314 19:41:08.192454    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-442000" podUID="8974ad44-5d36-48f0-bc6b-9115bab5fb5e"
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:08 multinode-442000 kubelet[1523]: I0314 19:41:08.232807    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-442000" podStartSLOduration=2.232765597 podCreationTimestamp="2024-03-14 19:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:41:08.211688076 +0000 UTC m=+8.033312297" watchObservedRunningTime="2024-03-14 19:41:08.232765597 +0000 UTC m=+8.054389818"
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.480285    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.480350    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.954598    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:09 multinode-442000 kubelet[1523]: E0314 19:41:09.954683    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:13.95466674 +0000 UTC m=+13.776290961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055917    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055948    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:10 multinode-442000 kubelet[1523]: E0314 19:41:10.055999    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:14.055983733 +0000 UTC m=+13.877608054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.864870    8428 command_runner.go:130] > Mar 14 19:41:11 multinode-442000 kubelet[1523]: E0314 19:41:11.480167    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.865406    8428 command_runner.go:130] > Mar 14 19:41:11 multinode-442000 kubelet[1523]: E0314 19:41:11.480285    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.865406    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.480095    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.865406    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.480797    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.988392    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:13 multinode-442000 kubelet[1523]: E0314 19:41:13.988528    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:21.98850961 +0000 UTC m=+21.810133831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089208    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089365    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:14 multinode-442000 kubelet[1523]: E0314 19:41:14.089427    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:22.089409571 +0000 UTC m=+21.911033792 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:15 multinode-442000 kubelet[1523]: E0314 19:41:15.480116    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.865527    8428 command_runner.go:130] > Mar 14 19:41:15 multinode-442000 kubelet[1523]: E0314 19:41:15.480286    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:17 multinode-442000 kubelet[1523]: E0314 19:41:17.479583    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:17 multinode-442000 kubelet[1523]: E0314 19:41:17.480025    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:19 multinode-442000 kubelet[1523]: E0314 19:41:19.480562    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:19 multinode-442000 kubelet[1523]: E0314 19:41:19.480625    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:21 multinode-442000 kubelet[1523]: E0314 19:41:21.479895    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:21 multinode-442000 kubelet[1523]: E0314 19:41:21.480437    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.866713    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.061436    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.061515    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:41:38.061499618 +0000 UTC m=+37.883123839 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162555    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162603    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:22 multinode-442000 kubelet[1523]: E0314 19:41:22.162667    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:41:38.162650651 +0000 UTC m=+37.984274872 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:23 multinode-442000 kubelet[1523]: E0314 19:41:23.480157    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:23 multinode-442000 kubelet[1523]: E0314 19:41:23.481151    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:25 multinode-442000 kubelet[1523]: E0314 19:41:25.479970    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:25 multinode-442000 kubelet[1523]: E0314 19:41:25.480065    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:27 multinode-442000 kubelet[1523]: E0314 19:41:27.480032    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:27 multinode-442000 kubelet[1523]: E0314 19:41:27.480122    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:29 multinode-442000 kubelet[1523]: E0314 19:41:29.480034    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:29 multinode-442000 kubelet[1523]: E0314 19:41:29.480291    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:31 multinode-442000 kubelet[1523]: E0314 19:41:31.479554    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:31 multinode-442000 kubelet[1523]: E0314 19:41:31.479650    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:33 multinode-442000 kubelet[1523]: E0314 19:41:33.479299    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:33 multinode-442000 kubelet[1523]: E0314 19:41:33.479835    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:35 multinode-442000 kubelet[1523]: E0314 19:41:35.479778    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:35 multinode-442000 kubelet[1523]: E0314 19:41:35.480230    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 kubelet[1523]: E0314 19:41:37.480388    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:37 multinode-442000 kubelet[1523]: E0314 19:41:37.480921    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.867693    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.089907    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0314 19:42:21.868716    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.090056    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume podName:2a563b3f-a175-4dc2-9f0b-67dbaefbfaac nodeName:}" failed. No retries permitted until 2024-03-14 19:42:10.090036325 +0000 UTC m=+69.911660546 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2a563b3f-a175-4dc2-9f0b-67dbaefbfaac-config-volume") pod "coredns-5dd5756b68-d22jc" (UID: "2a563b3f-a175-4dc2-9f0b-67dbaefbfaac") : object "kube-system"/"coredns" not registered
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191172    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191351    1523 projected.go:198] Error preparing data for projected volume kube-api-access-6hh9s for pod default/busybox-5b5d89c9d6-7446n: object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.191425    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s podName:6ca0ace6-596a-4504-80b5-0cc0cc11f9a2 nodeName:}" failed. No retries permitted until 2024-03-14 19:42:10.191406835 +0000 UTC m=+70.013031056 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6hh9s" (UniqueName: "kubernetes.io/projected/6ca0ace6-596a-4504-80b5-0cc0cc11f9a2-kube-api-access-6hh9s") pod "busybox-5b5d89c9d6-7446n" (UID: "6ca0ace6-596a-4504-80b5-0cc0cc11f9a2") : object "default"/"kube-root-ca.crt" not registered
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: I0314 19:41:38.578418    1523 scope.go:117] "RemoveContainer" containerID="07c2872c48edaa090b20d66267963c0d69c5c9eb97824b199af2d7e611ac596a"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: I0314 19:41:38.578814    1523 scope.go:117] "RemoveContainer" containerID="2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:38 multinode-442000 kubelet[1523]: E0314 19:41:38.579025    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(65d76566-4401-4b28-8452-10ed98624901)\"" pod="kube-system/storage-provisioner" podUID="65d76566-4401-4b28-8452-10ed98624901"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:39 multinode-442000 kubelet[1523]: E0314 19:41:39.479691    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:39 multinode-442000 kubelet[1523]: E0314 19:41:39.479909    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: E0314 19:41:41.479574    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-7446n" podUID="6ca0ace6-596a-4504-80b5-0cc0cc11f9a2"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: E0314 19:41:41.480003    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-d22jc" podUID="2a563b3f-a175-4dc2-9f0b-67dbaefbfaac"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:41 multinode-442000 kubelet[1523]: I0314 19:41:41.518811    1523 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:41:53 multinode-442000 kubelet[1523]: I0314 19:41:53.480206    1523 scope.go:117] "RemoveContainer" containerID="2876622a2618d9b60f7cb4f182054a8b2d30209e3bd14c5d4afe515101547bc8"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: I0314 19:42:00.447192    1523 scope.go:117] "RemoveContainer" containerID="9585e3eb2ead2f471eb0d22c8e29e4bfd954095774af365d80329ea39fff78e1"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: I0314 19:42:00.490865    1523 scope.go:117] "RemoveContainer" containerID="cd640f130e429bd4182c258358ec791604b8f307f9c45f2e3880e9b1a7df666a"
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]: E0314 19:42:00.516969    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0314 19:42:21.868766    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0314 19:42:21.869306    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0314 19:42:21.869306    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0314 19:42:21.869306    8428 command_runner.go:130] > Mar 14 19:42:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0314 19:42:21.869306    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.167906    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f"
	I0314 19:42:21.869306    8428 command_runner.go:130] > Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.214897    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439"
	I0314 19:42:21.911288    8428 logs.go:123] Gathering logs for kube-apiserver [a598d24960de] ...
	I0314 19:42:21.911288    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a598d24960de"
	I0314 19:42:21.941844    8428 command_runner.go:130] ! I0314 19:41:02.580148       1 options.go:220] external host was not specified, using 172.17.93.236
	I0314 19:42:21.941937    8428 command_runner.go:130] ! I0314 19:41:02.584195       1 server.go:148] Version: v1.28.4
	I0314 19:42:21.941937    8428 command_runner.go:130] ! I0314 19:41:02.584361       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.942223    8428 command_runner.go:130] ! I0314 19:41:03.945945       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0314 19:42:21.942280    8428 command_runner.go:130] ! I0314 19:41:03.963375       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0314 19:42:21.942388    8428 command_runner.go:130] ! I0314 19:41:03.963415       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0314 19:42:21.942388    8428 command_runner.go:130] ! I0314 19:41:03.963973       1 instance.go:298] Using reconciler: lease
	I0314 19:42:21.942447    8428 command_runner.go:130] ! I0314 19:41:04.031000       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0314 19:42:21.942474    8428 command_runner.go:130] ! W0314 19:41:04.031118       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.942474    8428 command_runner.go:130] ! I0314 19:41:04.342643       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0314 19:42:21.942474    8428 command_runner.go:130] ! I0314 19:41:04.343120       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0314 19:42:21.942558    8428 command_runner.go:130] ! I0314 19:41:04.862959       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0314 19:42:21.942558    8428 command_runner.go:130] ! I0314 19:41:04.875745       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0314 19:42:21.942558    8428 command_runner.go:130] ! W0314 19:41:04.875858       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.942641    8428 command_runner.go:130] ! W0314 19:41:04.875867       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.942641    8428 command_runner.go:130] ! I0314 19:41:04.876422       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0314 19:42:21.942641    8428 command_runner.go:130] ! W0314 19:41:04.876506       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.942693    8428 command_runner.go:130] ! I0314 19:41:04.877676       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0314 19:42:21.942723    8428 command_runner.go:130] ! I0314 19:41:04.878707       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0314 19:42:21.942723    8428 command_runner.go:130] ! W0314 19:41:04.878804       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0314 19:42:21.942806    8428 command_runner.go:130] ! W0314 19:41:04.878812       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0314 19:42:21.942806    8428 command_runner.go:130] ! I0314 19:41:04.881331       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0314 19:42:21.942864    8428 command_runner.go:130] ! W0314 19:41:04.881418       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0314 19:42:21.942890    8428 command_runner.go:130] ! I0314 19:41:04.882613       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0314 19:42:21.942890    8428 command_runner.go:130] ! W0314 19:41:04.882706       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.942947    8428 command_runner.go:130] ! W0314 19:41:04.882714       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943000    8428 command_runner.go:130] ! I0314 19:41:04.883473       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0314 19:42:21.943048    8428 command_runner.go:130] ! W0314 19:41:04.883562       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943048    8428 command_runner.go:130] ! W0314 19:41:04.883619       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943112    8428 command_runner.go:130] ! I0314 19:41:04.884340       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0314 19:42:21.943136    8428 command_runner.go:130] ! I0314 19:41:04.886289       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.886373       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.886382       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.886877       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.886971       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.886979       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.888213       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.888261       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.903461       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.903509       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.903517       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.906409       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.906458       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.906466       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.915366       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.915463       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.915472       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.916839       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.918318       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.918410       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.918418       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.922469       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.922563       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! W0314 19:41:04.922576       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0314 19:42:21.943165    8428 command_runner.go:130] ! I0314 19:41:04.923589       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0314 19:42:21.943703    8428 command_runner.go:130] ! W0314 19:41:04.923671       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943703    8428 command_runner.go:130] ! W0314 19:41:04.923678       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0314 19:42:21.943703    8428 command_runner.go:130] ! I0314 19:41:04.924323       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0314 19:42:21.943703    8428 command_runner.go:130] ! W0314 19:41:04.924409       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943703    8428 command_runner.go:130] ! I0314 19:41:04.946149       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0314 19:42:21.943809    8428 command_runner.go:130] ! W0314 19:41:04.946188       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0314 19:42:21.943837    8428 command_runner.go:130] ! I0314 19:41:05.649195       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:21.943837    8428 command_runner.go:130] ! I0314 19:41:05.649351       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:21.943927    8428 command_runner.go:130] ! I0314 19:41:05.650113       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0314 19:42:21.943927    8428 command_runner.go:130] ! I0314 19:41:05.651281       1 secure_serving.go:213] Serving securely on [::]:8443
	I0314 19:42:21.943927    8428 command_runner.go:130] ! I0314 19:41:05.651311       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:21.943983    8428 command_runner.go:130] ! I0314 19:41:05.651726       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.651907       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.654468       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.654814       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.655201       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.656049       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.656308       1 available_controller.go:423] Starting AvailableConditionController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.656404       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.651597       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.656599       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.658623       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.658785       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.659483       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.661076       1 aggregator.go:164] waiting for initial CRD sync...
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.662487       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.662789       1 controller.go:78] Starting OpenAPI AggregationController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.727194       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.728512       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729067       1 controller.go:116] Starting legacy_token_tracking_controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729317       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729432       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729507       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729606       1 controller.go:134] Starting OpenAPI controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729710       1 controller.go:85] Starting OpenAPI V3 controller
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729812       1 naming_controller.go:291] Starting NamingConditionController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.729911       1 establishing_controller.go:76] Starting EstablishingController
	I0314 19:42:21.944086    8428 command_runner.go:130] ! I0314 19:41:05.730411       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0314 19:42:21.944613    8428 command_runner.go:130] ! I0314 19:41:05.730521       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0314 19:42:21.944613    8428 command_runner.go:130] ! I0314 19:41:05.730616       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0314 19:42:21.944613    8428 command_runner.go:130] ! I0314 19:41:05.799477       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 19:42:21.944613    8428 command_runner.go:130] ! I0314 19:41:05.813580       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 19:42:21.944701    8428 command_runner.go:130] ! I0314 19:41:05.830168       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 19:42:21.944701    8428 command_runner.go:130] ! I0314 19:41:05.830217       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 19:42:21.944701    8428 command_runner.go:130] ! I0314 19:41:05.830281       1 aggregator.go:166] initial CRD sync complete...
	I0314 19:42:21.944783    8428 command_runner.go:130] ! I0314 19:41:05.830289       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 19:42:21.944846    8428 command_runner.go:130] ! I0314 19:41:05.830295       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 19:42:21.944869    8428 command_runner.go:130] ! I0314 19:41:05.830301       1 cache.go:39] Caches are synced for autoregister controller
	I0314 19:42:21.944926    8428 command_runner.go:130] ! I0314 19:41:05.846941       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 19:42:21.944977    8428 command_runner.go:130] ! I0314 19:41:05.857057       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 19:42:21.945012    8428 command_runner.go:130] ! I0314 19:41:05.858966       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 19:42:21.945036    8428 command_runner.go:130] ! I0314 19:41:05.865554       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 19:42:21.945092    8428 command_runner.go:130] ! I0314 19:41:05.865721       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 19:42:21.945115    8428 command_runner.go:130] ! I0314 19:41:06.667315       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 19:42:21.945142    8428 command_runner.go:130] ! W0314 19:41:07.118314       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.17.93.236]
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:07.120612       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:07.135973       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:09.049225       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:09.264220       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:09.277110       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:09.393446       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 19:42:21.945142    8428 command_runner.go:130] ! I0314 19:41:09.422214       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 19:42:21.952646    8428 logs.go:123] Gathering logs for kube-scheduler [dbb603289bf1] ...
	I0314 19:42:21.952646    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb603289bf1"
	I0314 19:42:21.979163    8428 command_runner.go:130] ! I0314 19:18:59.007917       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:21.979655    8428 command_runner.go:130] ! W0314 19:19:00.211611       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0314 19:42:21.979655    8428 command_runner.go:130] ! W0314 19:19:00.212802       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:21.979742    8428 command_runner.go:130] ! W0314 19:19:00.212990       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0314 19:42:21.979742    8428 command_runner.go:130] ! W0314 19:19:00.213108       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:42:21.979742    8428 command_runner.go:130] ! I0314 19:19:00.283055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:42:21.979742    8428 command_runner.go:130] ! I0314 19:19:00.284207       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:21.979742    8428 command_runner.go:130] ! I0314 19:19:00.288027       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:42:21.979849    8428 command_runner.go:130] ! I0314 19:19:00.288233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:21.979849    8428 command_runner.go:130] ! I0314 19:19:00.288206       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:42:21.979929    8428 command_runner.go:130] ! I0314 19:19:00.290233       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:21.979929    8428 command_runner.go:130] ! W0314 19:19:00.293166       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:21.980006    8428 command_runner.go:130] ! E0314 19:19:00.293367       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:21.980085    8428 command_runner.go:130] ! W0314 19:19:00.311723       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:21.980085    8428 command_runner.go:130] ! E0314 19:19:00.311803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:21.980160    8428 command_runner.go:130] ! W0314 19:19:00.312480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.980235    8428 command_runner.go:130] ! E0314 19:19:00.317665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.980235    8428 command_runner.go:130] ! W0314 19:19:00.313212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:21.980313    8428 command_runner.go:130] ! W0314 19:19:00.313379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:21.980313    8428 command_runner.go:130] ! W0314 19:19:00.313450       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:21.980388    8428 command_runner.go:130] ! W0314 19:19:00.313586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.980463    8428 command_runner.go:130] ! W0314 19:19:00.313632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.980463    8428 command_runner.go:130] ! W0314 19:19:00.313705       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:21.980538    8428 command_runner.go:130] ! W0314 19:19:00.313774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:21.980538    8428 command_runner.go:130] ! W0314 19:19:00.313864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:21.980612    8428 command_runner.go:130] ! W0314 19:19:00.313910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:21.980685    8428 command_runner.go:130] ! W0314 19:19:00.313978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:21.980685    8428 command_runner.go:130] ! W0314 19:19:00.314056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.980761    8428 command_runner.go:130] ! W0314 19:19:00.314091       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:21.980835    8428 command_runner.go:130] ! E0314 19:19:00.318101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:21.980835    8428 command_runner.go:130] ! E0314 19:19:00.318394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:21.980909    8428 command_runner.go:130] ! E0314 19:19:00.318606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:21.980983    8428 command_runner.go:130] ! E0314 19:19:00.318728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981058    8428 command_runner.go:130] ! E0314 19:19:00.318953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981058    8428 command_runner.go:130] ! E0314 19:19:00.319076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:21.981131    8428 command_runner.go:130] ! E0314 19:19:00.319318       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:21.981131    8428 command_runner.go:130] ! E0314 19:19:00.319575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:21.981205    8428 command_runner.go:130] ! E0314 19:19:00.319588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:21.981278    8428 command_runner.go:130] ! E0314 19:19:00.319719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:21.981278    8428 command_runner.go:130] ! E0314 19:19:00.319732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981357    8428 command_runner.go:130] ! E0314 19:19:00.319788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:21.981431    8428 command_runner.go:130] ! W0314 19:19:01.268901       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:21.981431    8428 command_runner.go:130] ! E0314 19:19:01.269219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 19:42:21.981506    8428 command_runner.go:130] ! W0314 19:19:01.309661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981583    8428 command_runner.go:130] ! E0314 19:19:01.309894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981583    8428 command_runner.go:130] ! W0314 19:19:01.318104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981666    8428 command_runner.go:130] ! E0314 19:19:01.318410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981721    8428 command_runner.go:130] ! W0314 19:19:01.382148       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:21.981755    8428 command_runner.go:130] ! E0314 19:19:01.382194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! W0314 19:19:01.454259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! E0314 19:19:01.454398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! W0314 19:19:01.505982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! E0314 19:19:01.506182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! W0314 19:19:01.640521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! E0314 19:19:01.640836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! W0314 19:19:01.681052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! E0314 19:19:01.681953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! W0314 19:19:01.732243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! E0314 19:19:01.732288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! W0314 19:19:01.767241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:21.981796    8428 command_runner.go:130] ! E0314 19:19:01.767329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0314 19:42:21.982324    8428 command_runner.go:130] ! W0314 19:19:01.783665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.982404    8428 command_runner.go:130] ! E0314 19:19:01.783845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 19:42:21.982437    8428 command_runner.go:130] ! W0314 19:19:01.812936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:21.982467    8428 command_runner.go:130] ! E0314 19:19:01.813027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0314 19:42:21.982467    8428 command_runner.go:130] ! W0314 19:19:01.821109       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:21.982467    8428 command_runner.go:130] ! E0314 19:19:01.821267       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:42:21.982467    8428 command_runner.go:130] ! W0314 19:19:01.843311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:21.982467    8428 command_runner.go:130] ! E0314 19:19:01.843339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0314 19:42:21.982467    8428 command_runner.go:130] ! W0314 19:19:01.914649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:21.982467    8428 command_runner.go:130] ! E0314 19:19:01.914986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:42:21.982467    8428 command_runner.go:130] ! I0314 19:19:04.090863       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:42:21.982467    8428 command_runner.go:130] ! I0314 19:38:43.236637       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0314 19:42:21.982467    8428 command_runner.go:130] ! I0314 19:38:43.237145       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0314 19:42:21.982467    8428 command_runner.go:130] ! E0314 19:38:43.237439       1 run.go:74] "command failed" err="finished without leader elect"
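Note on the kube-scheduler log above: the RBAC "forbidden" warnings all fall in the first seconds after startup (19:19:00-19:19:01), before the apiserver had published the scheduler's role bindings; the "Caches are synced" line at 19:19:04 shows the watches eventually succeeded, so these entries are expected startup noise rather than the failure under investigation. If the extension-apiserver-authentication warnings were to persist, the log itself names the remedy; a sketch of that command, with the binding name chosen here for illustration and granted to the system:kube-scheduler user seen in the errors:

    kubectl -n kube-system create rolebinding extension-apiserver-authentication-reader \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler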
	I0314 19:42:21.993743    8428 logs.go:123] Gathering logs for dmesg ...
	I0314 19:42:21.993743    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:42:22.014134    8428 command_runner.go:130] > [Mar14 19:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.111500] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.025646] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.051209] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.017569] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0314 19:42:22.014134    8428 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +5.774438] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +0.663188] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +1.473946] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0314 19:42:22.014134    8428 command_runner.go:130] > [  +5.849126] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0314 19:42:22.015143    8428 command_runner.go:130] > [Mar14 19:40] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.179743] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [ +24.853688] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.096946] kauditd_printk_skb: 73 callbacks suppressed
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.497369] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.185545] systemd-fstab-generator[1021]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.215423] systemd-fstab-generator[1035]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +2.887443] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.193519] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.182072] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.258988] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.819687] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +0.099817] kauditd_printk_skb: 205 callbacks suppressed
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +2.940519] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [Mar14 19:41] kauditd_printk_skb: 84 callbacks suppressed
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +4.042735] systemd-fstab-generator[3087]: Ignoring "noauto" option for root device
	I0314 19:42:22.015143    8428 command_runner.go:130] > [  +7.733278] kauditd_printk_skb: 70 callbacks suppressed
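The dmesg section above was produced by the filter shown at 19:42:21.993: human-readable timestamps, no pager, no color, and only messages at warning severity or above. For reference, the same invocation with the long-form equivalents of those flags (util-linux dmesg):

    # --human timestamps, no pager, no color, warnings and above only
    sudo dmesg --human --nopager --color=never \
      --level warn,err,crit,alert,emerg | tail -n 400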
	I0314 19:42:22.017600    8428 logs.go:123] Gathering logs for coredns [8899bc003893] ...
	I0314 19:42:22.017600    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8899bc003893"
	I0314 19:42:22.046053    8428 command_runner.go:130] > .:53
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0314 19:42:22.046053    8428 command_runner.go:130] > CoreDNS-1.10.1
	I0314 19:42:22.046053    8428 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 127.0.0.1:56069 - 18242 "HINFO IN 687842018263708116.264844942244880205. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.040568923s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:42598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000297623s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:49284 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.094729955s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:58753 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.047978925s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:60240 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.250879171s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:35705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107809s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:38792 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00013461s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:53339 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000060304s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:55975 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000059805s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:55630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117109s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:50181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.122219329s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:58918 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194615s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:48641 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012501s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:57540 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.0346353s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:59969 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000278722s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:51295 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167413s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:45005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148512s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:51938 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100608s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:46248 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00024762s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:46501 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100408s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:52414 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056704s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:44908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000121409s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:49578 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011941s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:51057 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060205s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.1.2:56240 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055805s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:32901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172914s
	I0314 19:42:22.046053    8428 command_runner.go:130] > [INFO] 10.244.0.3:41115 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149912s
	I0314 19:42:22.046574    8428 command_runner.go:130] > [INFO] 10.244.0.3:40494 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013161s
	I0314 19:42:22.046667    8428 command_runner.go:130] > [INFO] 10.244.0.3:40575 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077106s
	I0314 19:42:22.046759    8428 command_runner.go:130] > [INFO] 10.244.1.2:55307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194115s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:46435 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00025832s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:52095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156813s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:57849 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012701s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.0.3:47270 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244119s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.0.3:59009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000411532s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.0.3:40925 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108108s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.0.3:56417 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000067706s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108409s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:38949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118209s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:56933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156413s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] 10.244.1.2:35971 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000072406s
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0314 19:42:22.046814    8428 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
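Reading the CoreDNS query lines above: each record is client:port - query-id "TYPE CLASS name. proto msg-size DO-bit udp-bufsize" rcode response-flags response-size duration, so for example 10.244.0.3:42598 asked a PTR query over UDP and received a NOERROR answer of 116 bytes in about 0.3ms, with qr,aa,rd meaning an authoritative recursive-desired response. An illustrative way to reproduce one of these in-cluster lookups by hand (the context name is inferred from the multinode-442000 node names in this report, and the busybox image tag is an assumption):

    kubectl --context multinode-442000 run dns-probe --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local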
	I0314 19:42:22.049606    8428 logs.go:123] Gathering logs for kindnet [1a321c0e8997] ...
	I0314 19:42:22.049606    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a321c0e8997"
	I0314 19:42:22.078790    8428 command_runner.go:130] ! I0314 19:27:36.366640       1 main.go:227] handling current node
	I0314 19:42:22.078871    8428 command_runner.go:130] ! I0314 19:27:36.366652       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.078871    8428 command_runner.go:130] ! I0314 19:27:36.366658       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.078910    8428 command_runner.go:130] ! I0314 19:27:36.366818       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.078947    8428 command_runner.go:130] ! I0314 19:27:36.366827       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.078947    8428 command_runner.go:130] ! I0314 19:27:46.378468       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.078982    8428 command_runner.go:130] ! I0314 19:27:46.378496       1 main.go:227] handling current node
	I0314 19:42:22.078982    8428 command_runner.go:130] ! I0314 19:27:46.378506       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.078982    8428 command_runner.go:130] ! I0314 19:27:46.378513       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.078982    8428 command_runner.go:130] ! I0314 19:27:46.379039       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.078982    8428 command_runner.go:130] ! I0314 19:27:46.379130       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079139    8428 command_runner.go:130] ! I0314 19:27:56.393642       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079214    8428 command_runner.go:130] ! I0314 19:27:56.393700       1 main.go:227] handling current node
	I0314 19:42:22.079214    8428 command_runner.go:130] ! I0314 19:27:56.393723       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079248    8428 command_runner.go:130] ! I0314 19:27:56.393733       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079248    8428 command_runner.go:130] ! I0314 19:27:56.394716       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079293    8428 command_runner.go:130] ! I0314 19:27:56.394779       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079293    8428 command_runner.go:130] ! I0314 19:28:06.403171       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079293    8428 command_runner.go:130] ! I0314 19:28:06.403199       1 main.go:227] handling current node
	I0314 19:42:22.079293    8428 command_runner.go:130] ! I0314 19:28:06.403212       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079351    8428 command_runner.go:130] ! I0314 19:28:06.403219       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079351    8428 command_runner.go:130] ! I0314 19:28:06.403663       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079351    8428 command_runner.go:130] ! I0314 19:28:06.403834       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079406    8428 command_runner.go:130] ! I0314 19:28:16.415146       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079406    8428 command_runner.go:130] ! I0314 19:28:16.415237       1 main.go:227] handling current node
	I0314 19:42:22.079406    8428 command_runner.go:130] ! I0314 19:28:16.415250       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079406    8428 command_runner.go:130] ! I0314 19:28:16.415260       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079464    8428 command_runner.go:130] ! I0314 19:28:16.415497       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079464    8428 command_runner.go:130] ! I0314 19:28:16.415703       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079491    8428 command_runner.go:130] ! I0314 19:28:26.430257       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079491    8428 command_runner.go:130] ! I0314 19:28:26.430350       1 main.go:227] handling current node
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:26.430364       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:26.430372       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:26.430709       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:26.430804       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:36.445854       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:36.445897       1 main.go:227] handling current node
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:36.445915       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:36.446285       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:36.446702       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:36.446731       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:46.461369       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:46.462057       1 main.go:227] handling current node
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:46.462235       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:46.462250       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:46.462593       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:46.462770       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:56.477451       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:56.477483       1 main.go:227] handling current node
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:56.477496       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:56.477508       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:56.478007       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:28:56.478089       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:06.484423       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:06.484497       1 main.go:227] handling current node
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:06.484559       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:06.484624       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:06.484852       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:06.484945       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:16.500812       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:16.500909       1 main.go:227] handling current node
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:16.500924       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:16.500932       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.079552    8428 command_runner.go:130] ! I0314 19:29:16.501505       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080080    8428 command_runner.go:130] ! I0314 19:29:16.501585       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080080    8428 command_runner.go:130] ! I0314 19:29:26.508494       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080080    8428 command_runner.go:130] ! I0314 19:29:26.508585       1 main.go:227] handling current node
	I0314 19:42:22.080124    8428 command_runner.go:130] ! I0314 19:29:26.508601       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080124    8428 command_runner.go:130] ! I0314 19:29:26.508609       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080124    8428 command_runner.go:130] ! I0314 19:29:26.508822       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080176    8428 command_runner.go:130] ! I0314 19:29:26.508837       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080210    8428 command_runner.go:130] ! I0314 19:29:36.517002       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:36.517123       1 main.go:227] handling current node
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:36.517142       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:36.517155       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:36.517648       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:36.517836       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:46.530826       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:46.530962       1 main.go:227] handling current node
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:46.530978       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:46.531314       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:46.531557       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:46.531706       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:56.551916       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:56.551953       1 main.go:227] handling current node
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:56.551965       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:56.551971       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:56.552084       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:29:56.552107       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:06.560066       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:06.560115       1 main.go:227] handling current node
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:06.560129       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:06.560136       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:06.560429       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:06.560534       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:16.573690       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:16.573731       1 main.go:227] handling current node
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:16.573978       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:16.574067       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:16.574385       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:16.574414       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:26.589277       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:26.589488       1 main.go:227] handling current node
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:26.589534       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:26.589557       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:26.589802       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080239    8428 command_runner.go:130] ! I0314 19:30:26.589885       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080766    8428 command_runner.go:130] ! I0314 19:30:36.605356       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080766    8428 command_runner.go:130] ! I0314 19:30:36.605400       1 main.go:227] handling current node
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:36.605412       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:36.605418       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:36.605556       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:36.605625       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:46.612911       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:46.613010       1 main.go:227] handling current node
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:46.613025       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:46.613034       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:46.613445       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:46.615380       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:56.630605       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:56.630965       1 main.go:227] handling current node
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:56.631076       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:56.631132       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:56.631442       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:30:56.631542       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:06.643588       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:06.643631       1 main.go:227] handling current node
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:06.643643       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:06.643650       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:06.644160       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:06.644255       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:16.650940       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:16.651187       1 main.go:227] handling current node
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:16.651208       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:16.651236       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:16.651354       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:16.651374       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:26.665304       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:26.665403       1 main.go:227] handling current node
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:26.665418       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:26.665427       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:26.665674       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:26.665859       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.080806    8428 command_runner.go:130] ! I0314 19:31:36.681645       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081331    8428 command_runner.go:130] ! I0314 19:31:36.681680       1 main.go:227] handling current node
	I0314 19:42:22.081373    8428 command_runner.go:130] ! I0314 19:31:36.681695       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081373    8428 command_runner.go:130] ! I0314 19:31:36.681704       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081413    8428 command_runner.go:130] ! I0314 19:31:36.682032       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081413    8428 command_runner.go:130] ! I0314 19:31:36.682062       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081472    8428 command_runner.go:130] ! I0314 19:31:46.697305       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081472    8428 command_runner.go:130] ! I0314 19:31:46.697415       1 main.go:227] handling current node
	I0314 19:42:22.081527    8428 command_runner.go:130] ! I0314 19:31:46.697432       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081527    8428 command_runner.go:130] ! I0314 19:31:46.697444       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081527    8428 command_runner.go:130] ! I0314 19:31:46.697965       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081609    8428 command_runner.go:130] ! I0314 19:31:46.698093       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:31:56.705518       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:31:56.705613       1 main.go:227] handling current node
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:31:56.705627       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:31:56.705635       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:31:56.706151       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:31:56.706269       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:06.716977       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:06.717087       1 main.go:227] handling current node
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:06.717105       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:06.717116       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:06.717701       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:06.717870       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:16.738903       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:16.738946       1 main.go:227] handling current node
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:16.738962       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:16.738971       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:16.739310       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:16.739420       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:26.749067       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:26.749521       1 main.go:227] handling current node
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:26.749656       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:26.749670       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:26.750040       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:26.750074       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:36.765313       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:36.765423       1 main.go:227] handling current node
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:36.765442       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:36.765453       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:36.766102       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:36.766130       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:46.781715       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:46.781800       1 main.go:227] handling current node
	I0314 19:42:22.081631    8428 command_runner.go:130] ! I0314 19:32:46.782151       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082159    8428 command_runner.go:130] ! I0314 19:32:46.782168       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082159    8428 command_runner.go:130] ! I0314 19:32:46.782370       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082207    8428 command_runner.go:130] ! I0314 19:32:46.782396       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:32:56.797473       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:32:56.797568       1 main.go:227] handling current node
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:32:56.797583       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:32:56.797621       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:32:56.797733       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:32:56.797772       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:06.803421       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:06.803513       1 main.go:227] handling current node
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:06.803527       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:06.803534       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:06.804158       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:06.804237       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:16.818983       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:16.819134       1 main.go:227] handling current node
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:16.819149       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:16.819157       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:16.819421       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:16.819491       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:26.826209       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:26.826474       1 main.go:227] handling current node
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:26.826509       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:26.826519       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:26.826794       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:26.826886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:36.839979       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:36.840555       1 main.go:227] handling current node
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:36.840828       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:36.840855       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:36.841055       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:36.841183       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:46.854483       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:46.854585       1 main.go:227] handling current node
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:46.854600       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082231    8428 command_runner.go:130] ! I0314 19:33:46.854608       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082758    8428 command_runner.go:130] ! I0314 19:33:46.855303       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082758    8428 command_runner.go:130] ! I0314 19:33:46.855389       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:33:56.867052       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:33:56.867136       1 main.go:227] handling current node
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:33:56.867150       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:33:56.867158       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:33:56.867493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:33:56.867886       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:06.874298       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:06.874391       1 main.go:227] handling current node
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:06.874405       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:06.874413       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:06.874932       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:06.874962       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:16.890513       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:16.890589       1 main.go:227] handling current node
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:16.890604       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:16.890612       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:16.890870       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:16.890953       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:26.908423       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:26.908576       1 main.go:227] handling current node
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:26.908597       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:26.908606       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:26.909103       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:26.909271       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:36.915794       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:36.915910       1 main.go:227] handling current node
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:36.915926       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:36.915935       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:36.916282       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:36.916372       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:46.931699       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:46.931833       1 main.go:227] handling current node
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:46.931849       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.082799    8428 command_runner.go:130] ! I0314 19:34:46.931858       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083324    8428 command_runner.go:130] ! I0314 19:34:46.932099       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083324    8428 command_runner.go:130] ! I0314 19:34:46.932124       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083324    8428 command_runner.go:130] ! I0314 19:34:56.946470       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.083408    8428 command_runner.go:130] ! I0314 19:34:56.946565       1 main.go:227] handling current node
	I0314 19:42:22.083408    8428 command_runner.go:130] ! I0314 19:34:56.946580       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.083408    8428 command_runner.go:130] ! I0314 19:34:56.946588       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083408    8428 command_runner.go:130] ! I0314 19:34:56.946812       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083408    8428 command_runner.go:130] ! I0314 19:34:56.946927       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083499    8428 command_runner.go:130] ! I0314 19:35:06.960844       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.083499    8428 command_runner.go:130] ! I0314 19:35:06.960939       1 main.go:227] handling current node
	I0314 19:42:22.083499    8428 command_runner.go:130] ! I0314 19:35:06.960954       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.083499    8428 command_runner.go:130] ! I0314 19:35:06.960962       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083581    8428 command_runner.go:130] ! I0314 19:35:06.961467       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083581    8428 command_runner.go:130] ! I0314 19:35:06.961574       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083581    8428 command_runner.go:130] ! I0314 19:35:16.981993       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.083581    8428 command_runner.go:130] ! I0314 19:35:16.982080       1 main.go:227] handling current node
	I0314 19:42:22.083665    8428 command_runner.go:130] ! I0314 19:35:16.982095       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.083665    8428 command_runner.go:130] ! I0314 19:35:16.982103       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083665    8428 command_runner.go:130] ! I0314 19:35:16.982594       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083665    8428 command_runner.go:130] ! I0314 19:35:16.982673       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083748    8428 command_runner.go:130] ! I0314 19:35:26.993848       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.083748    8428 command_runner.go:130] ! I0314 19:35:26.993940       1 main.go:227] handling current node
	I0314 19:42:22.083748    8428 command_runner.go:130] ! I0314 19:35:26.993955       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.083748    8428 command_runner.go:130] ! I0314 19:35:26.993963       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083748    8428 command_runner.go:130] ! I0314 19:35:26.994360       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083829    8428 command_runner.go:130] ! I0314 19:35:26.994437       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083829    8428 command_runner.go:130] ! I0314 19:35:37.008613       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.083829    8428 command_runner.go:130] ! I0314 19:35:37.008706       1 main.go:227] handling current node
	I0314 19:42:22.083829    8428 command_runner.go:130] ! I0314 19:35:37.008720       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.083829    8428 command_runner.go:130] ! I0314 19:35:37.008727       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083918    8428 command_runner.go:130] ! I0314 19:35:37.009233       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083918    8428 command_runner.go:130] ! I0314 19:35:37.009320       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083918    8428 command_runner.go:130] ! I0314 19:35:47.018420       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.083918    8428 command_runner.go:130] ! I0314 19:35:47.018526       1 main.go:227] handling current node
	I0314 19:42:22.083999    8428 command_runner.go:130] ! I0314 19:35:47.018541       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.083999    8428 command_runner.go:130] ! I0314 19:35:47.018549       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.083999    8428 command_runner.go:130] ! I0314 19:35:47.018669       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.083999    8428 command_runner.go:130] ! I0314 19:35:47.018680       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.083999    8428 command_runner.go:130] ! I0314 19:35:57.025132       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084079    8428 command_runner.go:130] ! I0314 19:35:57.025207       1 main.go:227] handling current node
	I0314 19:42:22.084079    8428 command_runner.go:130] ! I0314 19:35:57.025220       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084079    8428 command_runner.go:130] ! I0314 19:35:57.025228       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084079    8428 command_runner.go:130] ! I0314 19:35:57.026009       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.084161    8428 command_runner.go:130] ! I0314 19:35:57.026145       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.084161    8428 command_runner.go:130] ! I0314 19:36:07.042281       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084161    8428 command_runner.go:130] ! I0314 19:36:07.042353       1 main.go:227] handling current node
	I0314 19:42:22.084161    8428 command_runner.go:130] ! I0314 19:36:07.042367       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084161    8428 command_runner.go:130] ! I0314 19:36:07.042375       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084240    8428 command_runner.go:130] ! I0314 19:36:07.042493       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.084240    8428 command_runner.go:130] ! I0314 19:36:07.042500       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.084240    8428 command_runner.go:130] ! I0314 19:36:17.055539       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084329    8428 command_runner.go:130] ! I0314 19:36:17.055567       1 main.go:227] handling current node
	I0314 19:42:22.084329    8428 command_runner.go:130] ! I0314 19:36:17.055581       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084329    8428 command_runner.go:130] ! I0314 19:36:17.055588       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084329    8428 command_runner.go:130] ! I0314 19:36:17.056312       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.084329    8428 command_runner.go:130] ! I0314 19:36:17.056341       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.084408    8428 command_runner.go:130] ! I0314 19:36:27.067921       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084434    8428 command_runner.go:130] ! I0314 19:36:27.067961       1 main.go:227] handling current node
	I0314 19:42:22.084461    8428 command_runner.go:130] ! I0314 19:36:27.069052       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084490    8428 command_runner.go:130] ! I0314 19:36:27.069179       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084490    8428 command_runner.go:130] ! I0314 19:36:27.069306       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.084523    8428 command_runner.go:130] ! I0314 19:36:27.069332       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:37.082322       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:37.082413       1 main.go:227] handling current node
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:37.082429       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:37.082437       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:37.082972       1 main.go:223] Handling node with IPs: map[172.17.85.186:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:37.083000       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.2.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:47.099685       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:47.099830       1 main.go:227] handling current node
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:47.099862       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:47.099982       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.107274       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.107368       1 main.go:227] handling current node
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.107382       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.107390       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.107827       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.107942       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:36:57.108076       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.84.215 Flags: [] Table: 0} 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:07.120709       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:07.121059       1 main.go:227] handling current node
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:07.121098       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:07.121109       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:07.121440       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:07.121455       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:17.137704       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:17.137784       1 main.go:227] handling current node
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:17.137796       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:17.137803       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:17.138265       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:17.138298       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:27.144505       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:27.144594       1 main.go:227] handling current node
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:27.144607       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:27.144615       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.084545    8428 command_runner.go:130] ! I0314 19:37:27.145062       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085071    8428 command_runner.go:130] ! I0314 19:37:27.145092       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085071    8428 command_runner.go:130] ! I0314 19:37:37.154684       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085071    8428 command_runner.go:130] ! I0314 19:37:37.154836       1 main.go:227] handling current node
	I0314 19:42:22.085071    8428 command_runner.go:130] ! I0314 19:37:37.154851       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085071    8428 command_runner.go:130] ! I0314 19:37:37.154860       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085160    8428 command_runner.go:130] ! I0314 19:37:37.155452       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085160    8428 command_runner.go:130] ! I0314 19:37:37.155614       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085160    8428 command_runner.go:130] ! I0314 19:37:47.168249       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085160    8428 command_runner.go:130] ! I0314 19:37:47.168338       1 main.go:227] handling current node
	I0314 19:42:22.085160    8428 command_runner.go:130] ! I0314 19:37:47.168352       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085240    8428 command_runner.go:130] ! I0314 19:37:47.168360       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085240    8428 command_runner.go:130] ! I0314 19:37:47.168976       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085240    8428 command_runner.go:130] ! I0314 19:37:47.169064       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085240    8428 command_runner.go:130] ! I0314 19:37:57.176039       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085322    8428 command_runner.go:130] ! I0314 19:37:57.176130       1 main.go:227] handling current node
	I0314 19:42:22.085322    8428 command_runner.go:130] ! I0314 19:37:57.176145       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085322    8428 command_runner.go:130] ! I0314 19:37:57.176153       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085402    8428 command_runner.go:130] ! I0314 19:37:57.176528       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085402    8428 command_runner.go:130] ! I0314 19:37:57.176659       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085402    8428 command_runner.go:130] ! I0314 19:38:07.189890       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085402    8428 command_runner.go:130] ! I0314 19:38:07.189993       1 main.go:227] handling current node
	I0314 19:42:22.085545    8428 command_runner.go:130] ! I0314 19:38:07.190008       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085545    8428 command_runner.go:130] ! I0314 19:38:07.190016       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085545    8428 command_runner.go:130] ! I0314 19:38:07.190217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085545    8428 command_runner.go:130] ! I0314 19:38:07.190245       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085545    8428 command_runner.go:130] ! I0314 19:38:17.196541       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085545    8428 command_runner.go:130] ! I0314 19:38:17.196633       1 main.go:227] handling current node
	I0314 19:42:22.085640    8428 command_runner.go:130] ! I0314 19:38:17.196647       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085640    8428 command_runner.go:130] ! I0314 19:38:17.196655       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085640    8428 command_runner.go:130] ! I0314 19:38:17.196888       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085640    8428 command_runner.go:130] ! I0314 19:38:17.197012       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085721    8428 command_runner.go:130] ! I0314 19:38:27.217365       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085721    8428 command_runner.go:130] ! I0314 19:38:27.217460       1 main.go:227] handling current node
	I0314 19:42:22.085721    8428 command_runner.go:130] ! I0314 19:38:27.217475       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085800    8428 command_runner.go:130] ! I0314 19:38:27.217483       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085800    8428 command_runner.go:130] ! I0314 19:38:27.217621       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085800    8428 command_runner.go:130] ! I0314 19:38:27.217634       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:42:22.085800    8428 command_runner.go:130] ! I0314 19:38:37.229941       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:42:22.085800    8428 command_runner.go:130] ! I0314 19:38:37.230048       1 main.go:227] handling current node
	I0314 19:42:22.085881    8428 command_runner.go:130] ! I0314 19:38:37.230062       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:42:22.085881    8428 command_runner.go:130] ! I0314 19:38:37.230070       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:42:22.085881    8428 command_runner.go:130] ! I0314 19:38:37.230268       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:42:22.085961    8428 command_runner.go:130] ! I0314 19:38:37.230338       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
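	(Editor's note: the kindnet entries above record a roughly 10-second reconcile loop: each tick the daemon walks the node list, skips the node it runs on, and ensures a route to every remote node's pod CIDR via that node's IP. When multinode-442000-m03 came back with a new CIDR at 19:36:57 (10.244.2.0/24 -> 10.244.3.0/24, new IP 172.17.84.215), the loop programmed a replacement route, which is the single routes.go:62 "Adding route" line. The Go sketch below is a deliberately simplified illustration of that pattern, not kindnet's actual source; the node type, ensureRoute helper, and the hard-coded node list are hypothetical stand-ins.)

	// Simplified sketch of a kindnet-style reconcile loop (assumed, not
	// the real kindnet code). Each tick: list nodes, skip the current
	// node, ensure a route to every remote node's pod CIDR.
	package main

	import (
		"fmt"
		"time"
	)

	// node is a hypothetical, trimmed-down view of a cluster node.
	type node struct {
		name    string
		ip      string // e.g. 172.17.80.135
		podCIDR string // e.g. 10.244.1.0/24
		current bool   // true for the node this daemon runs on
	}

	// ensureRoute stands in for the netlink route-replace call behind the
	// "Adding route {... Dst: 10.244.3.0/24 ... Gw: 172.17.84.215 ...}" line.
	func ensureRoute(dst, gw string) {
		fmt.Printf("Adding route dst=%s gw=%s\n", dst, gw)
	}

	func reconcile(nodes []node) {
		for _, n := range nodes {
			fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
			if n.current {
				fmt.Println("handling current node") // local CIDR needs no route
				continue
			}
			fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.podCIDR)
			ensureRoute(n.podCIDR, n.ip)
		}
	}

	func main() {
		nodes := []node{
			{"multinode-442000", "172.17.86.124", "10.244.0.0/24", true},
			{"multinode-442000-m02", "172.17.80.135", "10.244.1.0/24", false},
			{"multinode-442000-m03", "172.17.84.215", "10.244.3.0/24", false},
		}
		for range time.Tick(10 * time.Second) { // matches the ~10s cadence in the log
			reconcile(nodes)
		}
	}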
	I0314 19:42:22.102667    8428 logs.go:123] Gathering logs for kube-proxy [497007582e44] ...
	I0314 19:42:22.102667    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497007582e44"
	I0314 19:42:22.132564    8428 command_runner.go:130] ! I0314 19:41:08.342277       1 server_others.go:69] "Using iptables proxy"
	I0314 19:42:22.132953    8428 command_runner.go:130] ! I0314 19:41:08.381589       1 node.go:141] Successfully retrieved node IP: 172.17.93.236
	I0314 19:42:22.132953    8428 command_runner.go:130] ! I0314 19:41:08.703360       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:42:22.132953    8428 command_runner.go:130] ! I0314 19:41:08.703384       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:42:22.132953    8428 command_runner.go:130] ! I0314 19:41:08.724122       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:42:22.133043    8428 command_runner.go:130] ! I0314 19:41:08.726554       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:42:22.133043    8428 command_runner.go:130] ! I0314 19:41:08.729424       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:42:22.133043    8428 command_runner.go:130] ! I0314 19:41:08.729460       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:22.133043    8428 command_runner.go:130] ! I0314 19:41:08.732062       1 config.go:188] "Starting service config controller"
	I0314 19:42:22.133043    8428 command_runner.go:130] ! I0314 19:41:08.732501       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:42:22.133043    8428 command_runner.go:130] ! I0314 19:41:08.732571       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:42:22.133126    8428 command_runner.go:130] ! I0314 19:41:08.732581       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:42:22.133170    8428 command_runner.go:130] ! I0314 19:41:08.733523       1 config.go:315] "Starting node config controller"
	I0314 19:42:22.133170    8428 command_runner.go:130] ! I0314 19:41:08.733550       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:42:22.133170    8428 command_runner.go:130] ! I0314 19:41:08.832968       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:42:22.133170    8428 command_runner.go:130] ! I0314 19:41:08.833049       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:42:22.133170    8428 command_runner.go:130] ! I0314 19:41:08.835501       1 shared_informer.go:318] Caches are synced for node config
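	(Editor's note: the kube-proxy startup above follows the standard informer lifecycle: start the service, endpoint-slice, and node config controllers, then block on "Waiting for caches to sync" before touching iptables; the three "Caches are synced" lines mark the point where rule programming is safe. The sketch below shows the generic client-go shared-informer idiom behind those lines; it is not kube-proxy's actual config controllers, and the default kubeconfig path is an assumption.)

	// Minimal client-go sketch of the start/sync pattern recorded by the
	// "Waiting for caches to sync" / "Caches are synced" log lines above.
	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default kubeconfig (~/.kube/config) points at the cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		svc := factory.Core().V1().Services().Informer()            // "service config"
		eps := factory.Discovery().V1().EndpointSlices().Informer() // "endpoint slice config"

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop) // "Starting service config controller", etc.

		// Rule programming must wait until the local caches reflect the API
		// server; this is what the "Waiting for caches to sync" lines record.
		if !cache.WaitForCacheSync(stop, svc.HasSynced, eps.HasSynced) {
			panic("timed out waiting for caches to sync")
		}
		fmt.Println("caches are synced; safe to program proxy rules")
	}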
	I0314 19:42:22.137287    8428 logs.go:123] Gathering logs for kube-controller-manager [16b80f73683d] ...
	I0314 19:42:22.137376    8428 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b80f73683d"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:57.791996       1 serving.go:348] Generated self-signed cert in-memory
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:58.802083       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:58.802123       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:58.803952       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:58.804068       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:58.807259       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:18:58.807321       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.211766       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.241058       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.241394       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.241421       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.277645       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.277842       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.277987       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.278099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.278176       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.278283       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.278389       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0314 19:42:22.167451    8428 command_runner.go:130] ! I0314 19:19:03.278566       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0314 19:42:22.167981    8428 command_runner.go:130] ! W0314 19:19:03.278710       1 shared_informer.go:593] resyncPeriod 13h23m0.648968128s is smaller than resyncCheckPeriod 15h46m21.421594093s and the informer has already started. Changing it to 15h46m21.421594093s
	I0314 19:42:22.167981    8428 command_runner.go:130] ! I0314 19:19:03.278915       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0314 19:42:22.167981    8428 command_runner.go:130] ! I0314 19:19:03.279052       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0314 19:42:22.168063    8428 command_runner.go:130] ! I0314 19:19:03.279196       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0314 19:42:22.168063    8428 command_runner.go:130] ! I0314 19:19:03.279291       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0314 19:42:22.168063    8428 command_runner.go:130] ! I0314 19:19:03.279313       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0314 19:42:22.168148    8428 command_runner.go:130] ! I0314 19:19:03.279560       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0314 19:42:22.168148    8428 command_runner.go:130] ! I0314 19:19:03.279688       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0314 19:42:22.168148    8428 command_runner.go:130] ! I0314 19:19:03.279834       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0314 19:42:22.168224    8428 command_runner.go:130] ! I0314 19:19:03.279857       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0314 19:42:22.168224    8428 command_runner.go:130] ! I0314 19:19:03.279927       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0314 19:42:22.168224    8428 command_runner.go:130] ! I0314 19:19:03.280011       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0314 19:42:22.168224    8428 command_runner.go:130] ! I0314 19:19:03.280106       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0314 19:42:22.168301    8428 command_runner.go:130] ! I0314 19:19:03.280148       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:42:22.168301    8428 command_runner.go:130] ! I0314 19:19:03.280224       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:42:22.168301    8428 command_runner.go:130] ! I0314 19:19:03.280306       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:22.168301    8428 command_runner.go:130] ! I0314 19:19:03.280392       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:42:22.168301    8428 command_runner.go:130] ! I0314 19:19:03.297527       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0314 19:42:22.168376    8428 command_runner.go:130] ! I0314 19:19:03.297675       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0314 19:42:22.168376    8428 command_runner.go:130] ! I0314 19:19:03.297706       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0314 19:42:22.168376    8428 command_runner.go:130] ! I0314 19:19:03.310691       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0314 19:42:22.168376    8428 command_runner.go:130] ! I0314 19:19:03.310864       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0314 19:42:22.168376    8428 command_runner.go:130] ! I0314 19:19:03.311121       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0314 19:42:22.168376    8428 command_runner.go:130] ! I0314 19:19:03.311163       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0314 19:42:22.168459    8428 command_runner.go:130] ! I0314 19:19:03.311170       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0314 19:42:22.168459    8428 command_runner.go:130] ! I0314 19:19:03.312491       1 shared_informer.go:318] Caches are synced for tokens
	I0314 19:42:22.168459    8428 command_runner.go:130] ! I0314 19:19:03.324271       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0314 19:42:22.168459    8428 command_runner.go:130] ! I0314 19:19:03.324640       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0314 19:42:22.168459    8428 command_runner.go:130] ! I0314 19:19:03.324856       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0314 19:42:22.168535    8428 command_runner.go:130] ! I0314 19:19:03.341489       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:42:22.168535    8428 command_runner.go:130] ! I0314 19:19:03.341829       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:42:22.168535    8428 command_runner.go:130] ! I0314 19:19:03.359979       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:42:22.168610    8428 command_runner.go:130] ! I0314 19:19:03.360131       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:42:22.168610    8428 command_runner.go:130] ! I0314 19:19:03.373006       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:42:22.168610    8428 command_runner.go:130] ! I0314 19:19:03.373343       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:42:22.168610    8428 command_runner.go:130] ! I0314 19:19:03.373606       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:42:22.168610    8428 command_runner.go:130] ! I0314 19:19:03.385026       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0314 19:42:22.168688    8428 command_runner.go:130] ! I0314 19:19:03.385081       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0314 19:42:22.168688    8428 command_runner.go:130] ! I0314 19:19:03.385807       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0314 19:42:22.168688    8428 command_runner.go:130] ! I0314 19:19:03.399556       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0314 19:42:22.168688    8428 command_runner.go:130] ! I0314 19:19:03.399796       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0314 19:42:22.168762    8428 command_runner.go:130] ! I0314 19:19:03.399936       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0314 19:42:22.168762    8428 command_runner.go:130] ! I0314 19:19:03.400078       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:42:22.168762    8428 command_runner.go:130] ! I0314 19:19:03.400349       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:42:22.168762    8428 command_runner.go:130] ! I0314 19:19:03.400489       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:42:22.168762    8428 command_runner.go:130] ! I0314 19:19:03.521977       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0314 19:42:22.168837    8428 command_runner.go:130] ! I0314 19:19:03.522076       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0314 19:42:22.168837    8428 command_runner.go:130] ! I0314 19:19:03.522086       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0314 19:42:22.168837    8428 command_runner.go:130] ! I0314 19:19:03.567446       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0314 19:42:22.168837    8428 command_runner.go:130] ! I0314 19:19:03.567574       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0314 19:42:22.168919    8428 command_runner.go:130] ! I0314 19:19:03.567615       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:22.168919    8428 command_runner.go:130] ! I0314 19:19:03.568792       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0314 19:42:22.168919    8428 command_runner.go:130] ! I0314 19:19:03.568891       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0314 19:42:22.168996    8428 command_runner.go:130] ! I0314 19:19:03.569119       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:22.168996    8428 command_runner.go:130] ! I0314 19:19:03.570147       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0314 19:42:22.168996    8428 command_runner.go:130] ! I0314 19:19:03.570261       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:22.168996    8428 command_runner.go:130] ! I0314 19:19:03.570356       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:22.168996    8428 command_runner.go:130] ! I0314 19:19:03.571403       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0314 19:42:22.169074    8428 command_runner.go:130] ! I0314 19:19:03.571529       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:42:22.169108    8428 command_runner.go:130] ! I0314 19:19:03.571434       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0314 19:42:22.169108    8428 command_runner.go:130] ! I0314 19:19:03.572095       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:42:22.169136    8428 command_runner.go:130] ! I0314 19:19:03.723142       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0314 19:42:22.169172    8428 command_runner.go:130] ! I0314 19:19:03.723289       1 ttl_controller.go:124] "Starting TTL controller"
	I0314 19:42:22.169197    8428 command_runner.go:130] ! I0314 19:19:03.723300       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0314 19:42:22.169197    8428 command_runner.go:130] ! I0314 19:19:13.784656       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0314 19:42:22.169197    8428 command_runner.go:130] ! I0314 19:19:13.784710       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0314 19:42:22.169197    8428 command_runner.go:130] ! I0314 19:19:13.784891       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0314 19:42:22.169262    8428 command_runner.go:130] ! I0314 19:19:13.784975       1 shared_informer.go:311] Waiting for caches to sync for node
	I0314 19:42:22.169262    8428 command_runner.go:130] ! I0314 19:19:13.813537       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:42:22.169262    8428 command_runner.go:130] ! I0314 19:19:13.814099       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:42:22.169262    8428 command_runner.go:130] ! I0314 19:19:13.814528       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:42:22.169340    8428 command_runner.go:130] ! I0314 19:19:13.831516       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0314 19:42:22.169340    8428 command_runner.go:130] ! I0314 19:19:13.831928       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0314 19:42:22.169340    8428 command_runner.go:130] ! I0314 19:19:13.832023       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:22.169340    8428 command_runner.go:130] ! I0314 19:19:13.832052       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0314 19:42:22.169340    8428 command_runner.go:130] ! I0314 19:19:13.876141       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0314 19:42:22.169414    8428 command_runner.go:130] ! I0314 19:19:13.876437       1 horizontal.go:200] "Starting HPA controller"
	I0314 19:42:22.169414    8428 command_runner.go:130] ! I0314 19:19:13.876448       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0314 19:42:22.169414    8428 command_runner.go:130] ! I0314 19:19:13.892498       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0314 19:42:22.169414    8428 command_runner.go:130] ! I0314 19:19:13.892891       1 disruption.go:433] "Sending events to api server."
	I0314 19:42:22.169414    8428 command_runner.go:130] ! I0314 19:19:13.893092       1 disruption.go:444] "Starting disruption controller"
	I0314 19:42:22.169494    8428 command_runner.go:130] ! I0314 19:19:13.893185       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:42:22.169494    8428 command_runner.go:130] ! I0314 19:19:13.895299       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0314 19:42:22.169494    8428 command_runner.go:130] ! I0314 19:19:13.895861       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0314 19:42:22.169494    8428 command_runner.go:130] ! I0314 19:19:13.896105       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0314 19:42:22.169494    8428 command_runner.go:130] ! I0314 19:19:13.908480       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:42:22.169569    8428 command_runner.go:130] ! I0314 19:19:13.908861       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:42:22.169569    8428 command_runner.go:130] ! I0314 19:19:13.908873       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:42:22.169569    8428 command_runner.go:130] ! I0314 19:19:13.929369       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0314 19:42:22.169569    8428 command_runner.go:130] ! I0314 19:19:13.929803       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0314 19:42:22.169644    8428 command_runner.go:130] ! I0314 19:19:13.930050       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0314 19:42:22.169644    8428 command_runner.go:130] ! I0314 19:19:13.974683       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:42:22.169644    8428 command_runner.go:130] ! I0314 19:19:13.974899       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:42:22.169644    8428 command_runner.go:130] ! I0314 19:19:13.975108       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:42:22.169720    8428 command_runner.go:130] ! E0314 19:19:14.134866       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0314 19:42:22.169720    8428 command_runner.go:130] ! I0314 19:19:14.135266       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0314 19:42:22.169720    8428 command_runner.go:130] ! E0314 19:19:14.170400       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 19:42:22.169720    8428 command_runner.go:130] ! I0314 19:19:14.170426       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 19:42:22.169795    8428 command_runner.go:130] ! I0314 19:19:14.324676       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0314 19:42:22.169795    8428 command_runner.go:130] ! I0314 19:19:14.324865       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 19:42:22.169795    8428 command_runner.go:130] ! I0314 19:19:14.325169       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 19:42:22.169795    8428 command_runner.go:130] ! I0314 19:19:14.474401       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0314 19:42:22.169795    8428 command_runner.go:130] ! I0314 19:19:14.474562       1 controller.go:169] "Starting ephemeral volume controller"
	I0314 19:42:22.169871    8428 command_runner.go:130] ! I0314 19:19:14.474660       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0314 19:42:22.169871    8428 command_runner.go:130] ! I0314 19:19:14.633668       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0314 19:42:22.169871    8428 command_runner.go:130] ! I0314 19:19:14.633821       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0314 19:42:22.169955    8428 command_runner.go:130] ! I0314 19:19:14.633832       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0314 19:42:22.169955    8428 command_runner.go:130] ! I0314 19:19:14.773955       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0314 19:42:22.169955    8428 command_runner.go:130] ! I0314 19:19:14.774019       1 gc_controller.go:101] "Starting GC controller"
	I0314 19:42:22.170048    8428 command_runner.go:130] ! I0314 19:19:14.774027       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 19:42:22.170048    8428 command_runner.go:130] ! I0314 19:19:14.925568       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0314 19:42:22.170048    8428 command_runner.go:130] ! I0314 19:19:14.925814       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 19:42:22.170048    8428 command_runner.go:130] ! I0314 19:19:14.925828       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 19:42:22.170048    8428 command_runner.go:130] ! I0314 19:19:15.075328       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0314 19:42:22.170135    8428 command_runner.go:130] ! I0314 19:19:15.075556       1 job_controller.go:226] "Starting job controller"
	I0314 19:42:22.170135    8428 command_runner.go:130] ! I0314 19:19:15.075634       1 shared_informer.go:311] Waiting for caches to sync for job
	I0314 19:42:22.170224    8428 command_runner.go:130] ! I0314 19:19:15.225929       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0314 19:42:22.170299    8428 command_runner.go:130] ! I0314 19:19:15.226065       1 expand_controller.go:328] "Starting expand controller"
	I0314 19:42:22.170299    8428 command_runner.go:130] ! I0314 19:19:15.226077       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0314 19:42:22.170337    8428 command_runner.go:130] ! I0314 19:19:15.378471       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:42:22.170337    8428 command_runner.go:130] ! I0314 19:19:15.378640       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:42:22.170337    8428 command_runner.go:130] ! I0314 19:19:15.379237       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:42:22.170337    8428 command_runner.go:130] ! I0314 19:19:15.525089       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:42:22.170337    8428 command_runner.go:130] ! I0314 19:19:15.525565       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:42:22.170427    8428 command_runner.go:130] ! I0314 19:19:15.525643       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:42:22.170427    8428 command_runner.go:130] ! I0314 19:19:15.679545       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:42:22.170427    8428 command_runner.go:130] ! I0314 19:19:15.679611       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:42:22.170427    8428 command_runner.go:130] ! I0314 19:19:15.679619       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:42:22.170503    8428 command_runner.go:130] ! I0314 19:19:15.825516       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:42:22.170503    8428 command_runner.go:130] ! I0314 19:19:15.825908       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:42:22.170503    8428 command_runner.go:130] ! I0314 19:19:15.825920       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:42:22.170581    8428 command_runner.go:130] ! I0314 19:19:15.976308       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0314 19:42:22.170581    8428 command_runner.go:130] ! I0314 19:19:15.976673       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0314 19:42:22.170581    8428 command_runner.go:130] ! I0314 19:19:15.976858       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0314 19:42:22.170581    8428 command_runner.go:130] ! I0314 19:19:15.993409       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:42:22.170581    8428 command_runner.go:130] ! I0314 19:19:16.017841       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000\" does not exist"
	I0314 19:42:22.170658    8428 command_runner.go:130] ! I0314 19:19:16.022817       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 19:42:22.170658    8428 command_runner.go:130] ! I0314 19:19:16.023332       1 shared_informer.go:318] Caches are synced for TTL
	I0314 19:42:22.170658    8428 command_runner.go:130] ! I0314 19:19:16.025413       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 19:42:22.170658    8428 command_runner.go:130] ! I0314 19:19:16.025667       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 19:42:22.170658    8428 command_runner.go:130] ! I0314 19:19:16.025909       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 19:42:22.170736    8428 command_runner.go:130] ! I0314 19:19:16.026194       1 shared_informer.go:318] Caches are synced for expand
	I0314 19:42:22.170736    8428 command_runner.go:130] ! I0314 19:19:16.030689       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0314 19:42:22.170736    8428 command_runner.go:130] ! I0314 19:19:16.042937       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 19:42:22.170736    8428 command_runner.go:130] ! I0314 19:19:16.063170       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0314 19:42:22.170736    8428 command_runner.go:130] ! I0314 19:19:16.069816       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0314 19:42:22.170812    8428 command_runner.go:130] ! I0314 19:19:16.069953       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0314 19:42:22.170812    8428 command_runner.go:130] ! I0314 19:19:16.071382       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0314 19:42:22.170812    8428 command_runner.go:130] ! I0314 19:19:16.072881       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0314 19:42:22.170812    8428 command_runner.go:130] ! I0314 19:19:16.075260       1 shared_informer.go:318] Caches are synced for GC
	I0314 19:42:22.170812    8428 command_runner.go:130] ! I0314 19:19:16.075273       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.075312       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.076852       1 shared_informer.go:318] Caches are synced for HPA
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.077008       1 shared_informer.go:318] Caches are synced for crt configmap
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.077022       1 shared_informer.go:318] Caches are synced for job
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.079681       1 shared_informer.go:318] Caches are synced for deployment
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.079893       1 shared_informer.go:318] Caches are synced for cronjob
	I0314 19:42:22.170891    8428 command_runner.go:130] ! I0314 19:19:16.085788       1 shared_informer.go:318] Caches are synced for node
	I0314 19:42:22.170966    8428 command_runner.go:130] ! I0314 19:19:16.085869       1 range_allocator.go:174] "Sending events to api server"
	I0314 19:42:22.170966    8428 command_runner.go:130] ! I0314 19:19:16.085937       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 19:42:22.170966    8428 command_runner.go:130] ! I0314 19:19:16.085945       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 19:42:22.170966    8428 command_runner.go:130] ! I0314 19:19:16.085951       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 19:42:22.170966    8428 command_runner.go:130] ! I0314 19:19:16.086224       1 shared_informer.go:318] Caches are synced for PVC protection
	I0314 19:42:22.171041    8428 command_runner.go:130] ! I0314 19:19:16.093730       1 shared_informer.go:318] Caches are synced for disruption
	I0314 19:42:22.171041    8428 command_runner.go:130] ! I0314 19:19:16.093802       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:22.171041    8428 command_runner.go:130] ! I0314 19:19:16.097148       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 19:42:22.171041    8428 command_runner.go:130] ! I0314 19:19:16.098688       1 shared_informer.go:318] Caches are synced for service account
	I0314 19:42:22.171117    8428 command_runner.go:130] ! I0314 19:19:16.102404       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000" podCIDRs=["10.244.0.0/24"]
	I0314 19:42:22.171117    8428 command_runner.go:130] ! I0314 19:19:16.112396       1 shared_informer.go:318] Caches are synced for taint
	I0314 19:42:22.171117    8428 command_runner.go:130] ! I0314 19:19:16.112849       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0314 19:42:22.171117    8428 command_runner.go:130] ! I0314 19:19:16.113070       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000"
	I0314 19:42:22.171117    8428 command_runner.go:130] ! I0314 19:19:16.113155       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0314 19:42:22.171196    8428 command_runner.go:130] ! I0314 19:19:16.112659       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 19:42:22.171196    8428 command_runner.go:130] ! I0314 19:19:16.113865       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0314 19:42:22.171196    8428 command_runner.go:130] ! I0314 19:19:16.113966       1 taint_manager.go:210] "Sending events to api server"
	I0314 19:42:22.171196    8428 command_runner.go:130] ! I0314 19:19:16.115068       1 shared_informer.go:318] Caches are synced for namespace
	I0314 19:42:22.171196    8428 command_runner.go:130] ! I0314 19:19:16.118281       1 event.go:307] "Event occurred" object="multinode-442000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000 event: Registered Node multinode-442000 in Controller"
	I0314 19:42:22.171271    8428 command_runner.go:130] ! I0314 19:19:16.134584       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0314 19:42:22.171271    8428 command_runner.go:130] ! I0314 19:19:16.151625       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.171271    8428 command_runner.go:130] ! I0314 19:19:16.171551       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.171349    8428 command_runner.go:130] ! I0314 19:19:16.174341       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.171349    8428 command_runner.go:130] ! I0314 19:19:16.174358       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-442000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.171349    8428 command_runner.go:130] ! I0314 19:19:16.184987       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 19:42:22.171349    8428 command_runner.go:130] ! I0314 19:19:16.223118       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 19:42:22.171430    8428 command_runner.go:130] ! I0314 19:19:16.225526       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 19:42:22.171430    8428 command_runner.go:130] ! I0314 19:19:16.225950       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 19:42:22.171430    8428 command_runner.go:130] ! I0314 19:19:16.274020       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 19:42:22.171430    8428 command_runner.go:130] ! I0314 19:19:16.320250       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7b9lf"
	I0314 19:42:22.171504    8428 command_runner.go:130] ! I0314 19:19:16.328650       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cg28g"
	I0314 19:42:22.171504    8428 command_runner.go:130] ! I0314 19:19:16.626855       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:22.171504    8428 command_runner.go:130] ! I0314 19:19:16.633099       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 19:42:22.171504    8428 command_runner.go:130] ! I0314 19:19:16.633344       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 19:42:22.171582    8428 command_runner.go:130] ! I0314 19:19:16.789964       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0314 19:42:22.171582    8428 command_runner.go:130] ! I0314 19:19:17.099870       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pvxpr"
	I0314 19:42:22.171582    8428 command_runner.go:130] ! I0314 19:19:17.114819       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-d22jc"
	I0314 19:42:22.171659    8428 command_runner.go:130] ! I0314 19:19:17.146456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="355.713874ms"
	I0314 19:42:22.171659    8428 command_runner.go:130] ! I0314 19:19:17.166202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.688691ms"
	I0314 19:42:22.171659    8428 command_runner.go:130] ! I0314 19:19:17.169087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="2.771063ms"
	I0314 19:42:22.171734    8428 command_runner.go:130] ! I0314 19:19:18.399096       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0314 19:42:22.171734    8428 command_runner.go:130] ! I0314 19:19:18.448322       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-pvxpr"
	I0314 19:42:22.171734    8428 command_runner.go:130] ! I0314 19:19:18.482373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.944747ms"
	I0314 19:42:22.171811    8428 command_runner.go:130] ! I0314 19:19:18.500300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.716936ms"
	I0314 19:42:22.171811    8428 command_runner.go:130] ! I0314 19:19:18.500887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.317µs"
	I0314 19:42:22.171811    8428 command_runner.go:130] ! I0314 19:19:26.475232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.515µs"
	I0314 19:42:22.171811    8428 command_runner.go:130] ! I0314 19:19:26.505160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.309µs"
	I0314 19:42:22.171811    8428 command_runner.go:130] ! I0314 19:19:28.423231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.310782ms"
	I0314 19:42:22.171893    8428 command_runner.go:130] ! I0314 19:19:28.423925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.006µs"
	I0314 19:42:22.171926    8428 command_runner.go:130] ! I0314 19:19:31.116802       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0314 19:42:22.171953    8428 command_runner.go:130] ! I0314 19:22:02.467925       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:42:22.171953    8428 command_runner.go:130] ! I0314 19:22:02.479576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m02" podCIDRs=["10.244.1.0/24"]
	I0314 19:42:22.172012    8428 command_runner.go:130] ! I0314 19:22:02.507610       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-72dzs"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:02.511169       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-c7m4p"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:06.145908       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m02"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:06.146201       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:20.862710       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.188036       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.218022       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8drpb"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.241867       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-7446n"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.267427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="80.313691ms"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.292961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="25.159362ms"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.311264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.241692ms"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:45.311407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="93.911µs"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:48.320252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.515467ms"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:48.320403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.303µs"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:48.344640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.018521ms"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:22:48.344838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.804µs"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:25.208780       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:25.214591       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:25.248082       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.2.0/24"]
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:25.265233       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-r7zdb"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:25.273144       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w2qls"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:26.207170       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:26.207236       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:26:43.758846       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:33:46.333556       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:42:22.172034    8428 command_runner.go:130] ! I0314 19:33:46.333891       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172559    8428 command_runner.go:130] ! I0314 19:33:46.348976       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.172559    8428 command_runner.go:130] ! I0314 19:33:46.370200       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.172559    8428 command_runner.go:130] ! I0314 19:36:39.868492       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172636    8428 command_runner.go:130] ! I0314 19:36:41.400896       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-442000-m03 event: Removing Node multinode-442000-m03 from Controller"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:36:47.335802       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:36:47.336128       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:36:47.352987       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.3.0/24"]
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:36:51.403261       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:36:54.976864       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:38:21.463528       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:38:21.463818       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:38:21.486796       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:42:22.172671    8428 command_runner.go:130] ! I0314 19:38:21.501217       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
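
The controller-manager output above follows client-go's shared-informer startup pattern: each controller is registered ("Started controller"), announces "Waiting for caches to sync", and only begins processing once "Caches are synced" is printed. A minimal sketch of that pattern, assuming a kubeconfig at the default path; the factory and informer below are illustrative stand-ins, not kube-controller-manager's actual wiring:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Shared informer factory with a 30s resync, standing in for the
	// per-controller informers started in the log above.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)

	// Equivalent of "Waiting for caches to sync" / "Caches are synced":
	// no controller work starts before the local cache reflects the API server.
	if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced) {
		panic("timed out waiting for caches to sync")
	}
	fmt.Println("caches are synced; the control loop may start")
}
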
	I0314 19:42:24.692959    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods
	I0314 19:42:24.692959    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:24.692959    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:24.692959    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:24.698307    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:42:24.698307    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:24.698307    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:24.698307    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:24 GMT
	I0314 19:42:24.698307    8428 round_trippers.go:580]     Audit-Id: cfbdcadb-0d12-4859-82dd-7b35a841e2c4
	I0314 19:42:24.698307    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:24.698307    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:24.698307    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:24.699161    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1921"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1908","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83007 chars]
	I0314 19:42:24.703266    8428 system_pods.go:59] 12 kube-system pods found
	I0314 19:42:24.703266    8428 system_pods.go:61] "coredns-5dd5756b68-d22jc" [2a563b3f-a175-4dc2-9f0b-67dbaefbfaac] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "etcd-multinode-442000" [106cc31d-907f-4853-9e8d-f13c8ac4e398] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kindnet-7b9lf" [677b9084-0026-4b21-b041-445940624ed7] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kindnet-c7m4p" [926a47cb-e444-455d-8b74-d17a229020a1] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kindnet-r7zdb" [69b103aa-023b-4243-ba7b-875106aac183] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kube-apiserver-multinode-442000" [ebdd5ddf-2b02-4315-bc64-1b10c383d507] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kube-controller-manager-multinode-442000" [b16fc874-ef74-44ca-a54f-bb678bf982df] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kube-proxy-72dzs" [80b840b0-3803-4102-a966-ea73aed74f49] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kube-proxy-cg28g" [c7f798bf-6722-4731-af8d-ccd5703d116e] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kube-proxy-w2qls" [7a53e602-282e-4b63-a993-a5d23d3c615f] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "kube-scheduler-multinode-442000" [76b10598-fe0d-4a14-a8e4-a32221fbb68f] Running
	I0314 19:42:24.703266    8428 system_pods.go:61] "storage-provisioner" [65d76566-4401-4b28-8452-10ed98624901] Running
	I0314 19:42:24.703266    8428 system_pods.go:74] duration metric: took 3.7500593s to wait for pod list to return data ...
	I0314 19:42:24.703266    8428 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:42:24.703266    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/default/serviceaccounts
	I0314 19:42:24.703266    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:24.703266    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:24.703266    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:24.706404    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:24.706404    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:24.706404    8428 round_trippers.go:580]     Content-Length: 262
	I0314 19:42:24.706404    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:24 GMT
	I0314 19:42:24.706404    8428 round_trippers.go:580]     Audit-Id: f0249156-d4bf-4c39-be8d-dcff9f92224b
	I0314 19:42:24.706404    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:24.706404    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:24.706404    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:24.706404    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:24.706404    8428 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1921"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"31dfe296-58ba-4a37-a509-52c518a0c41a","resourceVersion":"365","creationTimestamp":"2024-03-14T19:19:16Z"}}]}
	I0314 19:42:24.707321    8428 default_sa.go:45] found service account: "default"
	I0314 19:42:24.707321    8428 default_sa.go:55] duration metric: took 4.0542ms for default service account to be created ...
	I0314 19:42:24.707321    8428 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:42:24.707598    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods
	I0314 19:42:24.707629    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:24.707629    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:24.707629    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:24.711291    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:42:24.711291    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:24.711291    8428 round_trippers.go:580]     Audit-Id: 05087bd0-2c43-4c05-ad11-6387d183ed88
	I0314 19:42:24.712291    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:24.712291    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:24.712291    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:24.712291    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:24.712291    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:24 GMT
	I0314 19:42:24.713345    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1921"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1908","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83007 chars]
	I0314 19:42:24.715897    8428 system_pods.go:86] 12 kube-system pods found
	I0314 19:42:24.715897    8428 system_pods.go:89] "coredns-5dd5756b68-d22jc" [2a563b3f-a175-4dc2-9f0b-67dbaefbfaac] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "etcd-multinode-442000" [106cc31d-907f-4853-9e8d-f13c8ac4e398] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kindnet-7b9lf" [677b9084-0026-4b21-b041-445940624ed7] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kindnet-c7m4p" [926a47cb-e444-455d-8b74-d17a229020a1] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kindnet-r7zdb" [69b103aa-023b-4243-ba7b-875106aac183] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kube-apiserver-multinode-442000" [ebdd5ddf-2b02-4315-bc64-1b10c383d507] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kube-controller-manager-multinode-442000" [b16fc874-ef74-44ca-a54f-bb678bf982df] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kube-proxy-72dzs" [80b840b0-3803-4102-a966-ea73aed74f49] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kube-proxy-cg28g" [c7f798bf-6722-4731-af8d-ccd5703d116e] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kube-proxy-w2qls" [7a53e602-282e-4b63-a993-a5d23d3c615f] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "kube-scheduler-multinode-442000" [76b10598-fe0d-4a14-a8e4-a32221fbb68f] Running
	I0314 19:42:24.715897    8428 system_pods.go:89] "storage-provisioner" [65d76566-4401-4b28-8452-10ed98624901] Running
	I0314 19:42:24.715897    8428 system_pods.go:126] duration metric: took 8.5757ms to wait for k8s-apps to be running ...
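
The system_pods wait above lists the kube-system pods and requires each to report phase Running before the harness proceeds. A minimal sketch of the same check with client-go, assuming an already-built client; allKubeSystemPodsRunning is an illustrative name, not one of minikube's helpers:

package podcheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allKubeSystemPodsRunning mirrors the "waiting for k8s-apps to be running"
// step: every pod in kube-system must report phase Running.
func allKubeSystemPodsRunning(ctx context.Context, client kubernetes.Interface) (bool, error) {
	pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, pod := range pods.Items {
		// The log above prints one "<name> [<uid>] Running" line per pod.
		if pod.Status.Phase != corev1.PodRunning {
			fmt.Printf("pod %q not ready yet (phase=%s)\n", pod.Name, pod.Status.Phase)
			return false, nil
		}
	}
	return true, nil
}
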
	I0314 19:42:24.715897    8428 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:42:24.724908    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:42:24.748846    8428 system_svc.go:56] duration metric: took 32.9463ms WaitForService to wait for kubelet
	I0314 19:42:24.748966    8428 kubeadm.go:576] duration metric: took 1m13.90952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:42:24.748966    8428 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:42:24.748966    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes
	I0314 19:42:24.748966    8428 round_trippers.go:469] Request Headers:
	I0314 19:42:24.748966    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:42:24.748966    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:42:24.753758    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:42:24.753758    8428 round_trippers.go:577] Response Headers:
	I0314 19:42:24.753758    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:42:24.753758    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:42:25 GMT
	I0314 19:42:24.753838    8428 round_trippers.go:580]     Audit-Id: 163913f8-3487-4480-96f8-d468a3f40123
	I0314 19:42:24.753838    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:42:24.753838    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:42:24.753838    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:42:24.754206    8428 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1921"},"items":[{"metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16256 chars]
	I0314 19:42:24.755363    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:42:24.755435    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:42:24.755435    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:42:24.755435    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:42:24.755435    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:42:24.755435    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:42:24.755508    8428 node_conditions.go:105] duration metric: took 6.5414ms to run NodePressure ...
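
The NodePressure pass above reads each node's capacity out of the NodeList (two CPUs and 17734596Ki of ephemeral storage per node here). A sketch of that read, reusing the package and imports of the podcheck sketch above; printNodeCapacity is likewise an illustrative name:

// printNodeCapacity reproduces the node_conditions lines above by reading
// CPU and ephemeral-storage capacity from each node's status.
func printNodeCapacity(ctx context.Context, client kubernetes.Interface) error {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}
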
	I0314 19:42:24.755508    8428 start.go:240] waiting for startup goroutines ...
	I0314 19:42:24.755508    8428 start.go:245] waiting for cluster config update ...
	I0314 19:42:24.755508    8428 start.go:254] writing updated cluster config ...
	I0314 19:42:24.761079    8428 out.go:177] 
	I0314 19:42:24.767119    8428 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:42:24.772407    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:42:24.772407    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:42:24.777556    8428 out.go:177] * Starting "multinode-442000-m02" worker node in "multinode-442000" cluster
	I0314 19:42:24.781938    8428 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:42:24.781938    8428 cache.go:56] Caching tarball of preloaded images
	I0314 19:42:24.781938    8428 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 19:42:24.781938    8428 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 19:42:24.781938    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:42:24.787627    8428 start.go:360] acquireMachinesLock for multinode-442000-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:42:24.787627    8428 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-442000-m02"
	I0314 19:42:24.787627    8428 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:42:24.787627    8428 fix.go:54] fixHost starting: m02
	I0314 19:42:24.787627    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:26.808599    8428 main.go:141] libmachine: [stdout =====>] : Off
	
	I0314 19:42:26.809623    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:26.809675    8428 fix.go:112] recreateIfNeeded on multinode-442000-m02: state=Stopped err=<nil>
	W0314 19:42:26.809790    8428 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:42:26.814342    8428 out.go:177] * Restarting existing hyperv VM for "multinode-442000-m02" ...
	I0314 19:42:26.816679    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-442000-m02
	I0314 19:42:29.726066    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:42:29.726287    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:29.726287    8428 main.go:141] libmachine: Waiting for host to start...
	I0314 19:42:29.726287    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:31.802428    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:42:31.802649    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:31.802718    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:42:34.120121    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:42:34.120121    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:35.120652    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:37.172337    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:42:37.172770    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:37.172836    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:42:39.446961    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:42:39.446995    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:40.454908    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:42.476048    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:42:42.476163    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:42.476240    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:42:44.783167    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:42:44.783167    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:45.791551    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:47.813359    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:42:47.813359    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:47.814171    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:42:50.074989    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:42:50.074989    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:51.087339    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:53.129558    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:42:53.129841    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:53.129841    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:42:55.505515    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:42:55.505554    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:55.507361    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:42:57.475661    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:42:57.475661    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:42:57.476014    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:42:59.841070    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:42:59.841070    8428 main.go:141] libmachine: [stderr =====>] : 
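
The loop above is libmachine's Hyper-V boot wait: after Start-VM it alternately queries the VM state and the first network adapter's first IP address via PowerShell, retrying until the guest reports one (172.17.93.200 here). A self-contained sketch of that polling pattern, assuming powershell.exe on PATH; this is not minikube's actual driver code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForIP polls Hyper-V until the named VM's first adapter reports an
// IP address, mirroring the repeated ipaddresses[0] queries in the log.
func waitForIP(vmName string, timeout time.Duration) (string, error) {
	ps := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
		if err == nil {
			if ip := strings.TrimSpace(string(out)); ip != "" {
				return ip, nil // e.g. "172.17.93.200" once the guest has booted
			}
		}
		time.Sleep(1 * time.Second) // the log shows a retry roughly every few seconds
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vmName)
}

func main() {
	ip, err := waitForIP("multinode-442000-m02", 2*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}
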
	I0314 19:42:59.841070    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:42:59.843425    8428 machine.go:94] provisionDockerMachine start ...
	I0314 19:42:59.843425    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:01.777806    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:01.777806    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:01.777964    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:04.152668    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:04.152668    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:04.156507    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:04.156654    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:04.156654    8428 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:43:04.281475    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:43:04.281475    8428 buildroot.go:166] provisioning hostname "multinode-442000-m02"
	I0314 19:43:04.281475    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:06.260591    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:06.260591    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:06.261410    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:08.594834    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:08.594834    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:08.598894    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:08.599265    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:08.599265    8428 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-442000-m02 && echo "multinode-442000-m02" | sudo tee /etc/hostname
	I0314 19:43:08.759647    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-442000-m02
	
	I0314 19:43:08.759647    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:10.753569    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:10.753659    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:10.753826    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:13.116567    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:13.116765    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:13.124233    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:13.124233    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:13.124233    8428 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-442000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-442000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-442000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:43:13.271548    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:43:13.271636    8428 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 19:43:13.271702    8428 buildroot.go:174] setting up certificates
	I0314 19:43:13.271748    8428 provision.go:84] configureAuth start
	I0314 19:43:13.271857    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:15.244755    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:15.245188    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:15.245261    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:17.590466    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:17.591513    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:17.591513    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:19.582246    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:19.583345    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:19.583376    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:21.917219    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:21.917745    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:21.917745    8428 provision.go:143] copyHostCerts
	I0314 19:43:21.917745    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 19:43:21.917745    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 19:43:21.917745    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 19:43:21.918380    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 19:43:21.919206    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 19:43:21.919364    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 19:43:21.919445    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 19:43:21.919594    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 19:43:21.920372    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 19:43:21.920608    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 19:43:21.920608    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 19:43:21.920608    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 19:43:21.921328    8428 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-442000-m02 san=[127.0.0.1 172.17.93.200 localhost minikube multinode-442000-m02]
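The server certificate is generated with the SAN list shown above so that TLS connections to the Docker daemon verify against any of those names or IPs. One way to inspect the SANs on the generated cert (a sketch, using the path the cert is later copied to on the guest):

    openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'
    # should list entries such as IP Address:127.0.0.1, IP Address:172.17.93.200,
    # DNS:localhost, DNS:minikube, DNS:multinode-442000-m02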
	I0314 19:43:22.223608    8428 provision.go:177] copyRemoteCerts
	I0314 19:43:22.233198    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:43:22.233198    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:24.189337    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:24.189337    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:24.189337    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:26.509019    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:26.509413    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:26.509697    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:43:26.609851    8428 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3761668s)
	I0314 19:43:26.609890    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 19:43:26.610218    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:43:26.652101    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 19:43:26.652248    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0314 19:43:26.693962    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 19:43:26.694363    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:43:26.735455    8428 provision.go:87] duration metric: took 13.4626972s to configureAuth
	I0314 19:43:26.735455    8428 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:43:26.735455    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:43:26.735455    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:28.704426    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:28.704426    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:28.704689    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:31.087452    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:31.087452    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:31.091352    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:31.091874    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:31.091874    8428 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 19:43:31.229188    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 19:43:31.229188    8428 buildroot.go:70] root file system type: tmpfs
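A tmpfs root is consistent with the minikube buildroot ISO booting into RAM, so files such as the docker unit do not survive a reboot and are regenerated on every provision, which is what the step below does. An equivalent check for the root filesystem type, if df's --output flag is unavailable:

    findmnt -n -o FSTYPE /
    # prints the filesystem type of the root mount, e.g. tmpfs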
	I0314 19:43:31.229732    8428 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 19:43:31.229849    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:33.210256    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:33.210256    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:33.210256    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:35.543113    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:35.543113    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:35.548106    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:35.548508    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:35.548508    8428 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.93.236"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 19:43:35.712813    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.93.236
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 19:43:35.712813    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:37.672506    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:37.688954    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:37.689089    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:40.056738    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:40.056738    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:40.060403    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:40.060802    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:40.060802    8428 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 19:43:42.342578    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 19:43:42.342578    8428 machine.go:97] duration metric: took 42.495965s to provisionDockerMachine
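Two systemd conventions drive this step. First, the bare ExecStart= line in the rendered unit clears any ExecStart inherited from earlier unit fragments, leaving the single dockerd command that follows as the only one; without it, systemd would refuse a non-oneshot unit with two ExecStart= settings, exactly as the unit's own comments warn. Second, the diff || { mv; daemon-reload; enable; restart; } one-liner only installs and restarts when the rendered unit differs from what is on disk; here diff fails because no unit existed yet, so the file is installed and the "Created symlink" line comes from systemctl -f enable docker. To confirm the effective ExecStart after the reset (a sketch):

    # only one non-empty ExecStart should remain in the loaded unit
    systemctl cat docker.service | grep '^ExecStart'
    # ExecStart=
    # ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 ...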
	I0314 19:43:42.342578    8428 start.go:293] postStartSetup for "multinode-442000-m02" (driver="hyperv")
	I0314 19:43:42.342578    8428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:43:42.351826    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:43:42.351826    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:44.321500    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:44.322439    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:44.322439    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:46.648572    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:46.648621    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:46.648621    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:43:46.750526    8428 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3983707s)
	I0314 19:43:46.758920    8428 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:43:46.765583    8428 command_runner.go:130] > NAME=Buildroot
	I0314 19:43:46.765583    8428 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 19:43:46.765583    8428 command_runner.go:130] > ID=buildroot
	I0314 19:43:46.765583    8428 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 19:43:46.765583    8428 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 19:43:46.765583    8428 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:43:46.765583    8428 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0314 19:43:46.766114    8428 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0314 19:43:46.766728    8428 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> 110522.pem in /etc/ssl/certs
	I0314 19:43:46.766728    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /etc/ssl/certs/110522.pem
	I0314 19:43:46.776371    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:43:46.792827    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /etc/ssl/certs/110522.pem (1708 bytes)
	I0314 19:43:46.834338    8428 start.go:296] duration metric: took 4.4914233s for postStartSetup
	I0314 19:43:46.834338    8428 fix.go:56] duration metric: took 1m22.0405476s for fixHost
	I0314 19:43:46.834338    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:48.761489    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:48.761583    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:48.761583    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:51.087077    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:51.087514    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:51.091029    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:51.091636    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:51.091636    8428 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:43:51.221355    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710445431.474296497
	
	I0314 19:43:51.221432    8428 fix.go:216] guest clock: 1710445431.474296497
	I0314 19:43:51.221432    8428 fix.go:229] Guest: 2024-03-14 19:43:51.474296497 +0000 UTC Remote: 2024-03-14 19:43:46.834338 +0000 UTC m=+284.346477901 (delta=4.639958497s)
	I0314 19:43:51.221507    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:53.182528    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:53.182562    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:53.182639    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:43:55.545891    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:43:55.545891    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:55.549623    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:43:55.550241    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.93.200 22 <nil> <nil>}
	I0314 19:43:55.550241    8428 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710445431
	I0314 19:43:55.686821    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 14 19:43:51 UTC 2024
	
	I0314 19:43:55.686821    8428 fix.go:236] clock set: Thu Mar 14 19:43:51 UTC 2024
	 (err=<nil>)
	I0314 19:43:55.686821    8428 start.go:83] releasing machines lock for "multinode-442000-m02", held for 1m30.8923684s
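The guest clock ran about 4.6s ahead of the host timestamp captured at postStartSetup, beyond minikube's tolerance, so the host epoch is pushed into the guest with date -s @<epoch>. The epoch used above decodes as expected (a quick check with GNU date):

    date -u -d @1710445431
    # Thu Mar 14 19:43:51 UTC 2024, matching the guest's reply above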
	I0314 19:43:55.687970    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:43:57.672870    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:43:57.673525    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:43:57.673525    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:44:00.030849    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:44:00.030849    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:00.033572    8428 out.go:177] * Found network options:
	I0314 19:44:00.035769    8428 out.go:177]   - NO_PROXY=172.17.93.236
	W0314 19:44:00.037724    8428 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 19:44:00.039504    8428 out.go:177]   - NO_PROXY=172.17.93.236
	W0314 19:44:00.041078    8428 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 19:44:00.042766    8428 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 19:44:00.044757    8428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:44:00.044757    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:44:00.051770    8428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 19:44:00.051770    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:44:02.045819    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:02.045935    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:02.045993    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:44:02.059280    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:02.059280    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:02.059280    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:44:04.505336    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:44:04.505336    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:04.505336    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:44:04.518554    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:44:04.518554    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:04.518554    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:44:04.598121    8428 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0314 19:44:04.598346    8428 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5461369s)
	W0314 19:44:04.598346    8428 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:44:04.609505    8428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:44:04.675195    8428 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 19:44:04.675292    8428 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6301878s)
	I0314 19:44:04.675292    8428 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0314 19:44:04.675449    8428 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
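Rather than deleting CNI configs outright, the find above renames any bridge or podman config to *.mk_disabled, so the CNI minikube manages (kindnet on this cluster) is the only active one and the originals can be restored later. The same rename written out as a plain loop (a sketch, same paths as above):

    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      # skip non-files (unmatched globs) and already-disabled configs
      [ -f "$f" ] && case "$f" in *.mk_disabled) ;; *) sudo mv "$f" "$f.mk_disabled";; esac
    done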
	I0314 19:44:04.675449    8428 start.go:494] detecting cgroup driver to use...
	I0314 19:44:04.675704    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:44:04.707714    8428 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0314 19:44:04.717196    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 19:44:04.744752    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 19:44:04.763306    8428 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 19:44:04.772646    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 19:44:04.800624    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:44:04.828339    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 19:44:04.854956    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:44:04.881672    8428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:44:04.907690    8428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 19:44:04.933871    8428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:44:04.950020    8428 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 19:44:04.958598    8428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
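Bridged pod traffic needs both kernel settings touched here: packets crossing the Linux bridge must be passed to iptables (bridge-nf-call-iptables, already 1 above) and the node must forward between interfaces (ip_forward, enabled by the echo). To verify both after the change:

    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # both should report = 1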
	I0314 19:44:04.983787    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:44:05.171967    8428 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 19:44:05.201671    8428 start.go:494] detecting cgroup driver to use...
	I0314 19:44:05.216543    8428 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 19:44:05.244196    8428 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0314 19:44:05.244196    8428 command_runner.go:130] > [Unit]
	I0314 19:44:05.244196    8428 command_runner.go:130] > Description=Docker Application Container Engine
	I0314 19:44:05.244196    8428 command_runner.go:130] > Documentation=https://docs.docker.com
	I0314 19:44:05.244196    8428 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0314 19:44:05.244196    8428 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0314 19:44:05.244196    8428 command_runner.go:130] > StartLimitBurst=3
	I0314 19:44:05.244196    8428 command_runner.go:130] > StartLimitIntervalSec=60
	I0314 19:44:05.244196    8428 command_runner.go:130] > [Service]
	I0314 19:44:05.244196    8428 command_runner.go:130] > Type=notify
	I0314 19:44:05.244196    8428 command_runner.go:130] > Restart=on-failure
	I0314 19:44:05.244196    8428 command_runner.go:130] > Environment=NO_PROXY=172.17.93.236
	I0314 19:44:05.244196    8428 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0314 19:44:05.244196    8428 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0314 19:44:05.244196    8428 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0314 19:44:05.244196    8428 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0314 19:44:05.244196    8428 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0314 19:44:05.244196    8428 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0314 19:44:05.244196    8428 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0314 19:44:05.244196    8428 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0314 19:44:05.244196    8428 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0314 19:44:05.244196    8428 command_runner.go:130] > ExecStart=
	I0314 19:44:05.244196    8428 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0314 19:44:05.244196    8428 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0314 19:44:05.244196    8428 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0314 19:44:05.244196    8428 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0314 19:44:05.244196    8428 command_runner.go:130] > LimitNOFILE=infinity
	I0314 19:44:05.244720    8428 command_runner.go:130] > LimitNPROC=infinity
	I0314 19:44:05.244720    8428 command_runner.go:130] > LimitCORE=infinity
	I0314 19:44:05.244720    8428 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0314 19:44:05.244720    8428 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0314 19:44:05.244780    8428 command_runner.go:130] > TasksMax=infinity
	I0314 19:44:05.244780    8428 command_runner.go:130] > TimeoutStartSec=0
	I0314 19:44:05.244822    8428 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0314 19:44:05.244822    8428 command_runner.go:130] > Delegate=yes
	I0314 19:44:05.244822    8428 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0314 19:44:05.244886    8428 command_runner.go:130] > KillMode=process
	I0314 19:44:05.244886    8428 command_runner.go:130] > [Install]
	I0314 19:44:05.244925    8428 command_runner.go:130] > WantedBy=multi-user.target
	I0314 19:44:05.254966    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:44:05.284772    8428 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:44:05.316522    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:44:05.346740    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:44:05.378469    8428 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 19:44:05.434710    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:44:05.457345    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:44:05.486496    8428 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
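/etc/crictl.yaml was first pointed at containerd and is rewritten here for cri-dockerd, since this cluster uses the docker runtime; crictl calls from this point on go through the cri-dockerd socket. The endpoint can also be passed per invocation instead of via the config file (a sketch):

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version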
	I0314 19:44:05.496594    8428 ssh_runner.go:195] Run: which cri-dockerd
	I0314 19:44:05.502693    8428 command_runner.go:130] > /usr/bin/cri-dockerd
	I0314 19:44:05.511454    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 19:44:05.528357    8428 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 19:44:05.566730    8428 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 19:44:05.755177    8428 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 19:44:05.932341    8428 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 19:44:05.932451    8428 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 19:44:05.971592    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:44:06.153863    8428 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 19:44:08.743376    8428 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5892643s)
	I0314 19:44:08.752821    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 19:44:08.783374    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:44:08.817883    8428 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 19:44:09.004360    8428 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 19:44:09.185525    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:44:09.361058    8428 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 19:44:09.397440    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:44:09.428488    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:44:09.610459    8428 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 19:44:09.712439    8428 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 19:44:09.724634    8428 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 19:44:09.732955    8428 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0314 19:44:09.732955    8428 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 19:44:09.732955    8428 command_runner.go:130] > Device: 0,22	Inode: 846         Links: 1
	I0314 19:44:09.732955    8428 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0314 19:44:09.732955    8428 command_runner.go:130] > Access: 2024-03-14 19:44:09.889828811 +0000
	I0314 19:44:09.732955    8428 command_runner.go:130] > Modify: 2024-03-14 19:44:09.889828811 +0000
	I0314 19:44:09.732955    8428 command_runner.go:130] > Change: 2024-03-14 19:44:09.893829164 +0000
	I0314 19:44:09.733104    8428 command_runner.go:130] >  Birth: -
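The stat above is how the 60s wait for cri-dockerd resolves: once the path exists and is a socket, the runtime is treated as up. A tighter check that asserts only the file type, assuming GNU stat as on this guest:

    stat -c '%F' /var/run/cri-dockerd.sock
    # prints "socket" once cri-dockerd is listening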
	I0314 19:44:09.733155    8428 start.go:562] Will wait 60s for crictl version
	I0314 19:44:09.741469    8428 ssh_runner.go:195] Run: which crictl
	I0314 19:44:09.746596    8428 command_runner.go:130] > /usr/bin/crictl
	I0314 19:44:09.756167    8428 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:44:09.828061    8428 command_runner.go:130] > Version:  0.1.0
	I0314 19:44:09.828061    8428 command_runner.go:130] > RuntimeName:  docker
	I0314 19:44:09.828061    8428 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0314 19:44:09.828061    8428 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 19:44:09.828061    8428 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 19:44:09.837622    8428 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:44:09.873435    8428 command_runner.go:130] > 25.0.4
	I0314 19:44:09.880979    8428 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:44:09.912289    8428 command_runner.go:130] > 25.0.4
	I0314 19:44:09.916104    8428 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 19:44:09.918093    8428 out.go:177]   - env NO_PROXY=172.17.93.236
	I0314 19:44:09.920068    8428 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0314 19:44:09.924061    8428 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0314 19:44:09.924061    8428 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0314 19:44:09.924061    8428 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0314 19:44:09.924061    8428 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:82:e8:09 Flags:up|broadcast|multicast|running}
	I0314 19:44:09.926404    8428 ip.go:210] interface addr: fe80::e3be:cf7e:6bd2:b964/64
	I0314 19:44:09.926404    8428 ip.go:210] interface addr: 172.17.80.1/20
	I0314 19:44:09.937245    8428 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0314 19:44:09.942748    8428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
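The one-liner above is a safe rewrite of /etc/hosts: filter out any stale host.minikube.internal line, append the fresh mapping, write to a temp file, then cp over the original (cp rather than mv preserves the inode, which matters for drivers where /etc/hosts is bind-mounted). The generic form of the pattern (a sketch; NAME and IP are placeholders, not values from this run):

    { grep -v $'\tNAME$' /etc/hosts; printf '%s\tNAME\n' "$IP"; } > /tmp/h.$$ \
      && sudo cp /tmp/h.$$ /etc/hosts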
	I0314 19:44:09.963451    8428 mustload.go:65] Loading cluster: multinode-442000
	I0314 19:44:09.964043    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:44:09.964509    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:44:12.011664    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:12.011664    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:12.011664    8428 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:44:12.012607    8428 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000 for IP: 172.17.93.200
	I0314 19:44:12.012607    8428 certs.go:194] generating shared ca certs ...
	I0314 19:44:12.012607    8428 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:44:12.013204    8428 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0314 19:44:12.013421    8428 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0314 19:44:12.013626    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 19:44:12.013844    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0314 19:44:12.013986    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 19:44:12.014022    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 19:44:12.014022    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem (1338 bytes)
	W0314 19:44:12.014557    8428 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052_empty.pem, impossibly tiny 0 bytes
	I0314 19:44:12.014662    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0314 19:44:12.014775    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0314 19:44:12.014989    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0314 19:44:12.015190    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0314 19:44:12.015572    8428 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem (1708 bytes)
	I0314 19:44:12.015673    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem -> /usr/share/ca-certificates/11052.pem
	I0314 19:44:12.015767    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem -> /usr/share/ca-certificates/110522.pem
	I0314 19:44:12.015900    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:44:12.016007    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:44:12.062723    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 19:44:12.105466    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:44:12.148126    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 19:44:12.188631    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11052.pem --> /usr/share/ca-certificates/11052.pem (1338 bytes)
	I0314 19:44:12.236602    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\110522.pem --> /usr/share/ca-certificates/110522.pem (1708 bytes)
	I0314 19:44:12.278564    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:44:12.330250    8428 ssh_runner.go:195] Run: openssl version
	I0314 19:44:12.337936    8428 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 19:44:12.347970    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11052.pem && ln -fs /usr/share/ca-certificates/11052.pem /etc/ssl/certs/11052.pem"
	I0314 19:44:12.376306    8428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11052.pem
	I0314 19:44:12.383055    8428 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:44:12.383055    8428 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:58 /usr/share/ca-certificates/11052.pem
	I0314 19:44:12.391962    8428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11052.pem
	I0314 19:44:12.399937    8428 command_runner.go:130] > 51391683
	I0314 19:44:12.409261    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11052.pem /etc/ssl/certs/51391683.0"
	I0314 19:44:12.436253    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110522.pem && ln -fs /usr/share/ca-certificates/110522.pem /etc/ssl/certs/110522.pem"
	I0314 19:44:12.469463    8428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110522.pem
	I0314 19:44:12.477415    8428 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:44:12.477415    8428 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:58 /usr/share/ca-certificates/110522.pem
	I0314 19:44:12.485416    8428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110522.pem
	I0314 19:44:12.495082    8428 command_runner.go:130] > 3ec20f2e
	I0314 19:44:12.508688    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110522.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:44:12.544212    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:44:12.572103    8428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:44:12.578992    8428 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:44:12.578992    8428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:44:12.588463    8428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:44:12.597619    8428 command_runner.go:130] > b5213941
	I0314 19:44:12.606348    8428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
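The openssl x509 -hash calls compute the subject-name hash that OpenSSL uses for directory-based CA lookup: each trusted cert must be reachable as <hash>.0, which is why the minikubeCA hash b5213941 becomes the symlink /etc/ssl/certs/b5213941.0 above. The same linking done by hand (a sketch, for the same PEM):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"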
	I0314 19:44:12.633790    8428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:44:12.640836    8428 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:44:12.640970    8428 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:44:12.641173    8428 kubeadm.go:928] updating node {m02 172.17.93.200 8443 v1.28.4 docker false true} ...
	I0314 19:44:12.641223    8428 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-442000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.93.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
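The kubelet unit uses the same ExecStart-reset trick as the docker unit: the bare ExecStart= clears the packaged command before the version-pinned kubelet from /var/lib/minikube/binaries is substituted, with --node-ip forcing the node to advertise 172.17.93.200 and --hostname-override keeping its name stable across re-provisions. Once the node rejoins, the pinned address can be confirmed from the control plane (a sketch):

    kubectl get nodes -o wide
    # INTERNAL-IP for multinode-442000-m02 should read 172.17.93.200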
	I0314 19:44:12.650957    8428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:44:12.668250    8428 command_runner.go:130] > kubeadm
	I0314 19:44:12.668273    8428 command_runner.go:130] > kubectl
	I0314 19:44:12.668273    8428 command_runner.go:130] > kubelet
	I0314 19:44:12.668343    8428 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:44:12.677540    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0314 19:44:12.695074    8428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0314 19:44:12.726385    8428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:44:12.762264    8428 ssh_runner.go:195] Run: grep 172.17.93.236	control-plane.minikube.internal$ /etc/hosts
	I0314 19:44:12.768995    8428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.93.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:44:12.797587    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:44:12.993042    8428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:44:13.020969    8428 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:44:13.021509    8428 start.go:316] joinCluster: &{Name:multinode-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-442000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.93.236 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.93.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.84.215 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:44:13.021509    8428 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.17.93.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:44:13.021509    8428 host.go:66] Checking if "multinode-442000-m02" exists ...
	I0314 19:44:13.022125    8428 mustload.go:65] Loading cluster: multinode-442000
	I0314 19:44:13.022571    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:44:13.022732    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:44:15.004316    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:15.004743    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:15.004743    8428 host.go:66] Checking if "multinode-442000" exists ...
	I0314 19:44:15.005313    8428 api_server.go:166] Checking apiserver status ...
	I0314 19:44:15.014175    8428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:44:15.014175    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:44:17.011297    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:17.011297    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:17.011974    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:44:19.350271    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:44:19.350271    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:19.351153    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:44:19.467782    8428 command_runner.go:130] > 2008
	I0314 19:44:19.468498    8428 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.4539366s)
	I0314 19:44:19.479317    8428 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2008/cgroup
	W0314 19:44:19.497262    8428 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2008/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:44:19.508930    8428 ssh_runner.go:195] Run: ls
	I0314 19:44:19.518325    8428 api_server.go:253] Checking apiserver healthz at https://172.17.93.236:8443/healthz ...
	I0314 19:44:19.529186    8428 api_server.go:279] https://172.17.93.236:8443/healthz returned 200:
	ok
	I0314 19:44:19.544218    8428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-442000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0314 19:44:19.693698    8428 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-c7m4p, kube-system/kube-proxy-72dzs
	I0314 19:44:22.732229    8428 command_runner.go:130] > node/multinode-442000-m02 cordoned
	I0314 19:44:22.732229    8428 command_runner.go:130] > pod "busybox-5b5d89c9d6-8drpb" has DeletionTimestamp older than 1 seconds, skipping
	I0314 19:44:22.732355    8428 command_runner.go:130] > node/multinode-442000-m02 drained
	I0314 19:44:22.732355    8428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-442000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1878989s)
	I0314 19:44:22.732355    8428 node.go:128] successfully drained node "multinode-442000-m02"
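Rejoining an existing worker proceeds as drain, reset, join: the control plane drains the node (DaemonSet-managed pods such as kindnet and kube-proxy are ignored, per the warning above), then kubeadm reset below wipes its local state so a fresh kubeadm join can succeed. The node should now show as cordoned (a sketch, run against the control plane):

    kubectl get node multinode-442000-m02
    # STATUS should include SchedulingDisabled after the drain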
	I0314 19:44:22.732355    8428 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0314 19:44:22.732666    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m02 ).state
	I0314 19:44:24.694226    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:24.694226    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:24.694226    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m02 ).networkadapters[0]).ipaddresses[0]
	I0314 19:44:27.034071    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.200
	
	I0314 19:44:27.034071    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:27.034571    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m02\id_rsa Username:docker}
	I0314 19:44:27.419352    8428 command_runner.go:130] > [preflight] Running pre-flight checks
	I0314 19:44:27.421117    8428 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0314 19:44:27.422201    8428 command_runner.go:130] > [reset] Stopping the kubelet service
	I0314 19:44:27.436164    8428 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0314 19:44:28.014663    8428 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0314 19:44:28.034315    8428 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0314 19:44:28.034435    8428 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0314 19:44:28.034435    8428 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0314 19:44:28.034435    8428 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0314 19:44:28.034435    8428 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0314 19:44:28.034490    8428 command_runner.go:130] > to reset your system's IPVS tables.
	I0314 19:44:28.034490    8428 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0314 19:44:28.034490    8428 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0314 19:44:28.036255    8428 command_runner.go:130] ! W0314 19:44:27.679521    1550 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0314 19:44:28.036387    8428 command_runner.go:130] ! W0314 19:44:28.271846    1550 cleanupnode.go:99] [reset] Failed to remove containers: failed to stop running pod a2877e9c2a8bda33c0139c1a1bf02c535834060c5ea2dbf379c752c83c6a304c: output: E0314 19:44:27.964246    1610 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-5b5d89c9d6-8drpb_default\" network: cni config uninitialized" podSandboxID="a2877e9c2a8bda33c0139c1a1bf02c535834060c5ea2dbf379c752c83c6a304c"
	I0314 19:44:28.036532    8428 command_runner.go:130] ! time="2024-03-14T19:44:27Z" level=fatal msg="stopping the pod sandbox \"a2877e9c2a8bda33c0139c1a1bf02c535834060c5ea2dbf379c752c83c6a304c\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-5b5d89c9d6-8drpb_default\" network: cni config uninitialized"
	I0314 19:44:28.036532    8428 command_runner.go:130] ! : exit status 1
	I0314 19:44:28.036634    8428 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.303688s)
	I0314 19:44:28.036770    8428 node.go:155] successfully reset node "multinode-442000-m02"
	I0314 19:44:28.037570    8428 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:44:28.038453    8428 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.93.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:44:28.039575    8428 cert_rotation.go:137] Starting client certificate rotation controller
	I0314 19:44:28.039934    8428 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0314 19:44:28.040011    8428 round_trippers.go:463] DELETE https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:28.040079    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:28.040079    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:28.040079    8428 round_trippers.go:473]     Content-Type: application/json
	I0314 19:44:28.040114    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:28.062066    8428 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0314 19:44:28.062066    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:28.062129    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:28 GMT
	I0314 19:44:28.062129    8428 round_trippers.go:580]     Audit-Id: 07eab05f-7218-4546-b48a-64d5d569cb3d
	I0314 19:44:28.062129    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:28.062129    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:28.062159    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:28.062159    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:28.062159    8428 round_trippers.go:580]     Content-Length: 171
	I0314 19:44:28.062187    8428 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-442000-m02","kind":"nodes","uid":"5f369d83-fce6-47fe-b14b-171ed626975b"}}
	I0314 19:44:28.062187    8428 node.go:180] successfully deleted node "multinode-442000-m02"
	I0314 19:44:28.062187    8428 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.17.93.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:44:28.062187    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0314 19:44:28.062187    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000 ).state
	I0314 19:44:30.043379    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:44:30.043379    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:30.043464    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000 ).networkadapters[0]).ipaddresses[0]
	I0314 19:44:32.376935    8428 main.go:141] libmachine: [stdout =====>] : 172.17.93.236
	
	I0314 19:44:32.376935    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:32.377349    8428 sshutil.go:53] new ssh client: &{IP:172.17.93.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000\id_rsa Username:docker}
	I0314 19:44:32.616519    8428 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lkihtm.szfhj1z8jquppx08 --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb 
	I0314 19:44:32.616574    8428 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5540472s)
	I0314 19:44:32.616707    8428 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.93.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:44:32.616767    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lkihtm.szfhj1z8jquppx08 --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-442000-m02"
	I0314 19:44:32.851207    8428 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:44:35.673533    8428 command_runner.go:130] > [preflight] Running pre-flight checks
	I0314 19:44:35.673609    8428 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0314 19:44:35.673609    8428 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0314 19:44:35.673609    8428 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:44:35.673691    8428 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:44:35.673691    8428 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0314 19:44:35.673691    8428 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0314 19:44:35.673691    8428 command_runner.go:130] > This node has joined the cluster:
	I0314 19:44:35.673691    8428 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0314 19:44:35.673756    8428 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0314 19:44:35.673805    8428 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0314 19:44:35.673805    8428 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lkihtm.szfhj1z8jquppx08 --discovery-token-ca-cert-hash sha256:d6cca618a9b89cd38081689b02ff56bf93130e2cdd7cca4172dee33bbb1d34eb --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-442000-m02": (3.0567517s)
	I0314 19:44:35.673900    8428 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0314 19:44:35.884827    8428 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0314 19:44:36.082729    8428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-442000-m02 minikube.k8s.io/updated_at=2024_03_14T19_44_36_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=multinode-442000 minikube.k8s.io/primary=false
	I0314 19:44:36.230888    8428 command_runner.go:130] > node/multinode-442000-m02 labeled
	I0314 19:44:36.231007    8428 start.go:318] duration metric: took 23.207766s to joinCluster
	I0314 19:44:36.231133    8428 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.93.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0314 19:44:36.251113    8428 out.go:177] * Verifying Kubernetes components...
	I0314 19:44:36.231711    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:44:36.264093    8428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:44:36.480416    8428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:44:36.516051    8428 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 19:44:36.516651    8428 kapi.go:59] client config for multinode-442000: &rest.Config{Host:"https://172.17.93.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-442000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ec9180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 19:44:36.517650    8428 node_ready.go:35] waiting up to 6m0s for node "multinode-442000-m02" to be "Ready" ...
	I0314 19:44:36.517650    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:36.517650    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:36.517650    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:36.517650    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:36.522545    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:36.522899    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:36.522899    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:36.522899    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:36 GMT
	I0314 19:44:36.522899    8428 round_trippers.go:580]     Audit-Id: f151b53b-b6c9-4e7d-83a3-1dce6974d5e5
	I0314 19:44:36.522899    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:36.522899    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:36.522899    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:36.523118    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2061","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3688 chars]
	I0314 19:44:37.020808    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:37.020889    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:37.020889    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:37.020889    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:37.024184    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:37.024704    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:37.024704    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:37.024704    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:37.024704    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:37 GMT
	I0314 19:44:37.024704    8428 round_trippers.go:580]     Audit-Id: 009ac116-9398-40d2-ae94-8ec46b3b4e95
	I0314 19:44:37.024704    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:37.024704    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:37.024963    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2061","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3688 chars]
	I0314 19:44:37.520754    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:37.521019    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:37.521019    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:37.521019    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:37.524988    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:37.525328    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:37.525328    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:37.525328    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:37 GMT
	I0314 19:44:37.525328    8428 round_trippers.go:580]     Audit-Id: 1e022147-cc71-4930-8c96-3d4fb5d43d2f
	I0314 19:44:37.525328    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:37.525384    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:37.525384    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:37.525567    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2061","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3688 chars]
	I0314 19:44:38.021838    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:38.021907    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:38.021907    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:38.021907    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:38.026490    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:38.026490    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:38.026490    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:38.026490    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:38 GMT
	I0314 19:44:38.026490    8428 round_trippers.go:580]     Audit-Id: 606f2489-4a52-49e8-b3c1-544d0f45ce12
	I0314 19:44:38.026490    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:38.026490    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:38.026490    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:38.026490    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2061","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3688 chars]
	I0314 19:44:38.522636    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:38.522636    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:38.522845    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:38.522845    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:38.527373    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:38.527373    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:38.527373    8428 round_trippers.go:580]     Audit-Id: b54c0c43-5bc3-4769-b1fd-ad0e6184979d
	I0314 19:44:38.527373    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:38.527373    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:38.527373    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:38.527373    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:38.527373    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:38 GMT
	I0314 19:44:38.528059    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2061","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3688 chars]
	I0314 19:44:38.528059    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:39.022793    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:39.022881    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:39.022881    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:39.022881    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:39.027371    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:39.027456    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:39.027456    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:39 GMT
	I0314 19:44:39.027551    8428 round_trippers.go:580]     Audit-Id: c76c39d0-a348-465a-bad6-e031a98aa3f3
	I0314 19:44:39.027597    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:39.027626    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:39.027626    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:39.027626    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:39.027626    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:39.524181    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:39.524181    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:39.524277    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:39.524277    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:39.528596    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:39.528964    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:39.528964    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:39 GMT
	I0314 19:44:39.528964    8428 round_trippers.go:580]     Audit-Id: be8f6a95-4345-41b8-83e1-2154edbed859
	I0314 19:44:39.528964    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:39.528964    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:39.528964    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:39.528964    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:39.529658    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:40.022622    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:40.022704    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:40.022704    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:40.022704    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:40.027921    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:40.027921    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:40.027921    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:40.027921    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:40.027921    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:40.027921    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:40 GMT
	I0314 19:44:40.027921    8428 round_trippers.go:580]     Audit-Id: 6d92ac80-9360-426a-b1a0-7ce7c32daec3
	I0314 19:44:40.027921    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:40.027921    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:40.524210    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:40.524291    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:40.524291    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:40.524291    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:40.528132    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:40.528132    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:40.528132    8428 round_trippers.go:580]     Audit-Id: b30a3256-6123-48eb-b236-48446c8eef47
	I0314 19:44:40.528132    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:40.528132    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:40.528132    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:40.528132    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:40.528132    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:40 GMT
	I0314 19:44:40.528132    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:40.528651    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:41.023193    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:41.023193    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:41.023193    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:41.023193    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:41.026769    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:41.026769    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:41.026769    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:41.026769    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:41 GMT
	I0314 19:44:41.027264    8428 round_trippers.go:580]     Audit-Id: f2a31b0c-b71c-4293-8cdb-325c1d21de88
	I0314 19:44:41.027264    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:41.027264    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:41.027264    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:41.027382    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:41.525036    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:41.525107    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:41.525107    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:41.525107    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:41.530881    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:41.530881    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:41.530881    8428 round_trippers.go:580]     Audit-Id: 457ddfb4-88a3-49c1-8a9c-5c9b8799ba35
	I0314 19:44:41.530881    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:41.530881    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:41.530881    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:41.530881    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:41.530881    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:41 GMT
	I0314 19:44:41.530881    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:42.022897    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:42.022987    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:42.022987    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:42.022987    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:42.028650    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:42.028650    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:42.028650    8428 round_trippers.go:580]     Audit-Id: f5fc9e4f-2fd4-457a-867c-d12d3b12e9c4
	I0314 19:44:42.028650    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:42.028650    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:42.029179    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:42.029179    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:42.029179    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:42 GMT
	I0314 19:44:42.029317    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:42.521467    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:42.521543    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:42.521543    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:42.521543    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:42.527184    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:42.527184    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:42.527184    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:42.527184    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:42.527184    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:42 GMT
	I0314 19:44:42.527184    8428 round_trippers.go:580]     Audit-Id: dff35d79-e600-4dd2-85f3-a3185dd1cf2a
	I0314 19:44:42.527184    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:42.527184    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:42.527859    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:43.022079    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:43.022159    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:43.022159    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:43.022159    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:43.026530    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:43.026530    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:43.026615    8428 round_trippers.go:580]     Audit-Id: 92c300b6-3443-4bed-a12b-86fd2381d3bf
	I0314 19:44:43.026615    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:43.026615    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:43.026669    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:43.026669    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:43.026669    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:43 GMT
	I0314 19:44:43.026772    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:43.027300    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:43.524207    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:43.524207    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:43.524207    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:43.524207    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:43.527851    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:43.527851    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:43.527851    8428 round_trippers.go:580]     Audit-Id: 7106586b-f89a-4a30-97c9-fb26b6589539
	I0314 19:44:43.527851    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:43.527851    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:43.527851    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:43.527851    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:43.528792    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:43 GMT
	I0314 19:44:43.528947    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:44.021420    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:44.021595    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:44.021595    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:44.021595    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:44.025262    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:44.025744    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:44.025744    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:44 GMT
	I0314 19:44:44.025744    8428 round_trippers.go:580]     Audit-Id: a2081053-dc2e-40ed-a363-79541e921114
	I0314 19:44:44.025744    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:44.025744    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:44.025744    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:44.025744    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:44.025960    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:44.522760    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:44.522850    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:44.522850    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:44.522850    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:44.527016    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:44.527016    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:44.527128    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:44.527128    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:44.527128    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:44.527174    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:44 GMT
	I0314 19:44:44.527174    8428 round_trippers.go:580]     Audit-Id: 250e711d-ebf7-49e4-8b58-b03372d08dca
	I0314 19:44:44.527174    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:44.527174    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:45.024098    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:45.024371    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:45.024371    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:45.024371    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:45.027820    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:45.027820    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:45.028198    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:45.028198    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:45 GMT
	I0314 19:44:45.028198    8428 round_trippers.go:580]     Audit-Id: 9f300c51-6b2c-40ef-a72c-0e41bba8af55
	I0314 19:44:45.028198    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:45.028198    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:45.028198    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:45.028324    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:45.028811    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:45.524612    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:45.524797    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:45.524797    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:45.524830    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:45.528555    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:45.529340    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:45.529340    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:45.529384    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:45.529384    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:45.529384    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:45 GMT
	I0314 19:44:45.529384    8428 round_trippers.go:580]     Audit-Id: b72d7e2d-2e27-4687-8d92-47b50540e97c
	I0314 19:44:45.529384    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:45.529384    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2077","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3797 chars]
	I0314 19:44:46.024341    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:46.024430    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:46.024430    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:46.024430    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:46.027725    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:46.027725    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:46.027725    8428 round_trippers.go:580]     Audit-Id: cc505574-5779-4e3a-a679-08ef6f53473f
	I0314 19:44:46.027725    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:46.027725    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:46.027725    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:46.027725    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:46.027725    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:46 GMT
	I0314 19:44:46.028585    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:46.527107    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:46.527323    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:46.527323    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:46.527323    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:46.530708    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:46.531253    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:46.531253    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:46.531253    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:46.531253    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:46.531253    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:46 GMT
	I0314 19:44:46.531253    8428 round_trippers.go:580]     Audit-Id: 7dffc06f-e25b-4936-bfef-969df03e0e7e
	I0314 19:44:46.531253    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:46.531436    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:47.026797    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:47.026797    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:47.026797    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:47.026797    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:47.030503    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:47.030503    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:47.030503    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:47 GMT
	I0314 19:44:47.030503    8428 round_trippers.go:580]     Audit-Id: be40900e-d975-47b7-a9d7-ad52fda93fd0
	I0314 19:44:47.030503    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:47.030503    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:47.030503    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:47.030503    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:47.031318    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:47.031900    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:47.528060    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:47.528060    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:47.528060    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:47.528060    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:47.533377    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:47.533377    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:47.533459    8428 round_trippers.go:580]     Audit-Id: 9428b3c7-89a6-4bd6-b6c1-447de9aa240b
	I0314 19:44:47.533459    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:47.533459    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:47.533459    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:47.533459    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:47.533459    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:47 GMT
	I0314 19:44:47.533656    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:48.024627    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:48.024680    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:48.024733    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:48.024733    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:48.029113    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:48.029113    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:48.029113    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:48 GMT
	I0314 19:44:48.029113    8428 round_trippers.go:580]     Audit-Id: 49cbee37-2f7d-4769-95e5-bbb9b2a8811f
	I0314 19:44:48.029113    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:48.029113    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:48.029113    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:48.029113    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:48.029113    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:48.532419    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:48.532531    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:48.532531    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:48.532531    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:48.536483    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:48.536574    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:48.536611    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:48.536611    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:48.536611    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:48 GMT
	I0314 19:44:48.536611    8428 round_trippers.go:580]     Audit-Id: 19f0e3ed-bd07-4f43-9751-5820c71e20d5
	I0314 19:44:48.536611    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:48.536611    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:48.536611    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:49.026744    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:49.026744    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:49.026744    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:49.026744    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:49.031450    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:49.031529    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:49.031529    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:49.031529    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:49.031529    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:49 GMT
	I0314 19:44:49.031529    8428 round_trippers.go:580]     Audit-Id: 8b81c027-d892-4816-94bc-d00c6a714181
	I0314 19:44:49.031529    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:49.031529    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:49.031617    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:49.032076    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:49.530160    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:49.530230    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:49.530230    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:49.530230    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:49.534526    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:49.534526    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:49.534526    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:49.534526    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:49 GMT
	I0314 19:44:49.534526    8428 round_trippers.go:580]     Audit-Id: 226ca101-665e-4eab-a5de-75ade9244506
	I0314 19:44:49.534526    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:49.534526    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:49.534526    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:49.535004    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:50.031212    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:50.031507    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:50.031507    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:50.031507    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:50.040419    8428 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 19:44:50.040419    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:50.040419    8428 round_trippers.go:580]     Audit-Id: 992d37ba-898e-4e34-9ee6-7678bf0fea9c
	I0314 19:44:50.040419    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:50.040419    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:50.040419    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:50.040419    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:50.040419    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:50 GMT
	I0314 19:44:50.042114    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:50.530902    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:50.531131    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:50.531215    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:50.531215    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:50.535026    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:50.535026    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:50.535026    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:50.535389    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:50.535389    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:50 GMT
	I0314 19:44:50.535389    8428 round_trippers.go:580]     Audit-Id: ad2129bf-28c0-4b07-b974-a7c3a1cb2bde
	I0314 19:44:50.535389    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:50.535389    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:50.535598    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:51.032508    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:51.032585    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:51.032585    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:51.032585    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:51.036268    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:51.036426    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:51.036426    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:51.036426    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:51.036426    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:51 GMT
	I0314 19:44:51.036426    8428 round_trippers.go:580]     Audit-Id: e9987cc4-fff5-460d-80fb-898ad9c42b74
	I0314 19:44:51.036426    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:51.036426    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:51.036490    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:51.037020    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:51.520101    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:51.520101    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:51.520101    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:51.520101    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:51.523812    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:51.523812    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:51.523812    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:51.523812    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:51.523812    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:51 GMT
	I0314 19:44:51.523812    8428 round_trippers.go:580]     Audit-Id: c4ffd01c-8f45-44be-80b7-24213b5d7af9
	I0314 19:44:51.523812    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:51.523812    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:51.524802    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:52.021468    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:52.021468    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:52.021527    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:52.021527    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:52.024756    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:52.025353    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:52.025353    8428 round_trippers.go:580]     Audit-Id: 47827d2c-26e1-44a3-8f95-1430870c470c
	I0314 19:44:52.025399    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:52.025399    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:52.025399    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:52.025399    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:52.025399    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:52 GMT
	I0314 19:44:52.025548    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:52.523383    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:52.523590    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:52.523590    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:52.523590    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:52.528867    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:52.529775    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:52.529775    8428 round_trippers.go:580]     Audit-Id: f5b9006c-0760-4f05-a42b-4f7bd99f77cb
	I0314 19:44:52.529775    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:52.529775    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:52.529775    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:52.529775    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:52.529775    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:52 GMT
	I0314 19:44:52.529974    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:53.026213    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:53.026287    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:53.026287    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:53.026287    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:53.029960    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:53.030059    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:53.030059    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:53.030059    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:53 GMT
	I0314 19:44:53.030059    8428 round_trippers.go:580]     Audit-Id: be5e9cdf-a475-4f2f-a45c-188880ba2984
	I0314 19:44:53.030151    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:53.030241    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:53.030241    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:53.030483    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:53.527960    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:53.527960    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:53.527960    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:53.527960    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:53.533130    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:53.533130    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:53.533130    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:53 GMT
	I0314 19:44:53.533130    8428 round_trippers.go:580]     Audit-Id: 949fe0c9-6ce0-4473-a6b8-6bb98425793e
	I0314 19:44:53.533130    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:53.533130    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:53.533130    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:53.533130    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:53.533744    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:53.534086    8428 node_ready.go:53] node "multinode-442000-m02" has status "Ready":"False"
	I0314 19:44:54.025392    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:54.025474    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:54.025474    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:54.025474    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:54.031248    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:54.031248    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:54.031306    8428 round_trippers.go:580]     Audit-Id: e866fbe5-3e90-4d50-a5d7-0fdbb4eeea16
	I0314 19:44:54.031306    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:54.031306    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:54.031306    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:54.031306    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:54.031306    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:54 GMT
	I0314 19:44:54.031459    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:54.526516    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:54.526516    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:54.526516    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:54.526516    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:54.530184    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:54.530508    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:54.530565    8428 round_trippers.go:580]     Audit-Id: 579b04a8-ce88-466a-ba03-da457c6e0b58
	I0314 19:44:54.530593    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:54.530593    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:54.530649    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:54.530699    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:54.530699    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:54 GMT
	I0314 19:44:54.530947    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2086","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0314 19:44:55.025559    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:55.025761    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.025837    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.025837    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.031629    8428 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 19:44:55.031629    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.031629    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.031629    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.031629    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.031629    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.031629    8428 round_trippers.go:580]     Audit-Id: 1454818e-b0b2-400c-a348-7408cac08c5e
	I0314 19:44:55.031629    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.031970    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2110","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0314 19:44:55.032103    8428 node_ready.go:49] node "multinode-442000-m02" has status "Ready":"True"
	I0314 19:44:55.032103    8428 node_ready.go:38] duration metric: took 18.5130743s for node "multinode-442000-m02" to be "Ready" ...
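The 18.5s wait above is a plain client-side readiness poll: node_ready re-fetches the Node object roughly every 500ms and inspects its Ready condition until it flips to True (the periodic 'has status "Ready":"False"' lines are the poll observing a still-unready node). A minimal client-go sketch of that pattern follows; the function name, kubeconfig wiring, and error handling are illustrative assumptions, not minikube's actual implementation in node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the named Node every 500ms until its Ready
// condition reports True or the timeout expires (illustrative sketch,
// not minikube's node_ready.go).
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet" and keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not posted yet
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-442000-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

Each successful iteration of such a loop corresponds to one GET / 200 OK / Response Body group in the log above.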
	I0314 19:44:55.032103    8428 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:44:55.032103    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods
	I0314 19:44:55.032103    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.032103    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.032103    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.038123    8428 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 19:44:55.038438    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.038438    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.038438    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.038438    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.038438    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.038438    8428 round_trippers.go:580]     Audit-Id: 7221dde7-b0c7-4093-9385-86fa4c1a9551
	I0314 19:44:55.038438    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.039626    8428 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2112"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1908","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82557 chars]
	I0314 19:44:55.043548    8428 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.044116    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d22jc
	I0314 19:44:55.044116    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.044116    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.044163    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.047137    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:44:55.047137    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.047137    8428 round_trippers.go:580]     Audit-Id: a29737c5-cbe5-41d5-b0d0-efa9c7fcb612
	I0314 19:44:55.047816    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.047816    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.047816    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.047816    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.047816    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.047917    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d22jc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2a563b3f-a175-4dc2-9f0b-67dbaefbfaac","resourceVersion":"1908","creationTimestamp":"2024-03-14T19:19:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"65689a14-ed43-48af-8104-53ea2e3991f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65689a14-ed43-48af-8104-53ea2e3991f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0314 19:44:55.048302    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:55.048302    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.048302    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.048302    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.050873    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:44:55.050873    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.050873    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.050873    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.050873    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.050873    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.050873    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.050873    8428 round_trippers.go:580]     Audit-Id: 7b83fe14-89fa-4208-a8a8-19d75c229969
	I0314 19:44:55.051823    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:44:55.052187    8428 pod_ready.go:92] pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:55.052187    8428 pod_ready.go:81] duration metric: took 8.6381ms for pod "coredns-5dd5756b68-d22jc" in "kube-system" namespace to be "Ready" ...
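The 8.6381ms coredns check above is the fast path of the same pattern: fetch the Pod, test its Ready condition, and (as the paired GET on multinode-442000 shows) re-confirm the pod's node. The condition test itself reduces to a few lines, again continuing the client-go sketch above (helper name illustrative, not pod_ready.go's actual code):

// podIsReady reports whether a Pod's Ready condition is True; this is
// the check behind the 'has status "Ready":"True"' lines in the log.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

The etcd, kube-apiserver, and remaining control-plane pod waits that follow repeat exactly this fetch-and-test cycle.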
	I0314 19:44:55.052187    8428 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.052187    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-442000
	I0314 19:44:55.052187    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.052187    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.052187    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.055078    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:44:55.055611    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.055611    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.055611    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.055611    8428 round_trippers.go:580]     Audit-Id: ebb19d2a-d5be-4221-b97d-f21a89d54183
	I0314 19:44:55.055611    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.055611    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.055611    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.055761    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-442000","namespace":"kube-system","uid":"106cc31d-907f-4853-9e8d-f13c8ac4e398","resourceVersion":"1808","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.93.236:2379","kubernetes.io/config.hash":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.mirror":"fa99a5621d016aa714804afcaa1e0a53","kubernetes.io/config.seen":"2024-03-14T19:41:00.367789550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0314 19:44:55.055960    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:55.055960    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.055960    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.055960    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.059249    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:55.059249    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.059249    8428 round_trippers.go:580]     Audit-Id: 26320e85-7756-45af-943d-a496f77b5177
	I0314 19:44:55.059249    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.059249    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.059249    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.059249    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.059249    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.059715    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:44:55.059715    8428 pod_ready.go:92] pod "etcd-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:55.059715    8428 pod_ready.go:81] duration metric: took 7.5276ms for pod "etcd-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.059715    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.060245    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-442000
	I0314 19:44:55.060245    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.060245    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.060245    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.062428    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:44:55.062428    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.062428    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.062428    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.062428    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.062428    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.062428    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.062428    8428 round_trippers.go:580]     Audit-Id: d6747f61-2a1e-4a63-a093-49c0d2fa8c3c
	I0314 19:44:55.063384    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-442000","namespace":"kube-system","uid":"ebdd5ddf-2b02-4315-bc64-1b10c383d507","resourceVersion":"1817","creationTimestamp":"2024-03-14T19:41:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.93.236:8443","kubernetes.io/config.hash":"7754d2f32966faec8123dc3b8a2af767","kubernetes.io/config.mirror":"7754d2f32966faec8123dc3b8a2af767","kubernetes.io/config.seen":"2024-03-14T19:41:00.350706636Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:41:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0314 19:44:55.063384    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:55.063384    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.063384    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.063384    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.066036    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:44:55.066036    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.066036    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.066036    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.066036    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.066036    8428 round_trippers.go:580]     Audit-Id: c8159632-c2e1-48f3-aa39-7528ec8b1265
	I0314 19:44:55.066036    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.066036    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.067055    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:44:55.067704    8428 pod_ready.go:92] pod "kube-apiserver-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:55.067704    8428 pod_ready.go:81] duration metric: took 7.988ms for pod "kube-apiserver-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.067748    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.067780    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-442000
	I0314 19:44:55.067780    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.067780    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.067780    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.070008    8428 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 19:44:55.070008    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.070008    8428 round_trippers.go:580]     Audit-Id: a86a88b8-91fe-4d6e-99a3-b0c2533e83ad
	I0314 19:44:55.070008    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.070008    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.070008    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.070008    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.070008    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.071101    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-442000","namespace":"kube-system","uid":"b16fc874-ef74-44ca-a54f-bb678bf982df","resourceVersion":"1813","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.mirror":"a7ee530f2bd843eddeace8cd6ec0d204","kubernetes.io/config.seen":"2024-03-14T19:18:55.420205308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0314 19:44:55.071720    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:55.071720    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.071720    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.071720    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.074984    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:55.074984    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.074984    8428 round_trippers.go:580]     Audit-Id: 69a22c54-086e-43c1-a97e-e8fc9348ef17
	I0314 19:44:55.074984    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.074984    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.074984    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.074984    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.074984    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.075688    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:44:55.076003    8428 pod_ready.go:92] pod "kube-controller-manager-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:55.076003    8428 pod_ready.go:81] duration metric: took 8.2541ms for pod "kube-controller-manager-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.076003    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.225643    8428 request.go:629] Waited for 149.49ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:44:55.225832    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72dzs
	I0314 19:44:55.225832    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.225832    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.225832    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.229641    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:55.229641    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.229641    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.229641    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.229641    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.229641    8428 round_trippers.go:580]     Audit-Id: 18982d73-5ebe-47cf-a6b4-63e5a753ddaf
	I0314 19:44:55.229641    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.229641    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.230037    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-72dzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"80b840b0-3803-4102-a966-ea73aed74f49","resourceVersion":"2094","creationTimestamp":"2024-03-14T19:22:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:22:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5542 chars]
	I0314 19:44:55.427584    8428 request.go:629] Waited for 196.9716ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:55.427927    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m02
	I0314 19:44:55.427927    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.427927    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.427927    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.431778    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:55.431778    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.432330    8428 round_trippers.go:580]     Audit-Id: a462edc8-0c50-4db1-8881-d940ec00b59c
	I0314 19:44:55.432330    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.432330    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.432330    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.432330    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.432330    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.432458    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m02","uid":"269d9995-0aad-4983-a7b9-80160bdd37ef","resourceVersion":"2110","creationTimestamp":"2024-03-14T19:44:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_44_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:44:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0314 19:44:55.432932    8428 pod_ready.go:92] pod "kube-proxy-72dzs" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:55.432932    8428 pod_ready.go:81] duration metric: took 356.903ms for pod "kube-proxy-72dzs" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.432932    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.631631    8428 request.go:629] Waited for 198.4338ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:44:55.631803    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg28g
	I0314 19:44:55.631803    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.631803    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.631803    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.635498    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:55.636214    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.636214    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.636214    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.636214    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.636214    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:55 GMT
	I0314 19:44:55.636214    8428 round_trippers.go:580]     Audit-Id: f51fe332-d756-4c07-8dd7-5b2d7e182b6d
	I0314 19:44:55.636214    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.636441    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cg28g","generateName":"kube-proxy-","namespace":"kube-system","uid":"c7f798bf-6722-4731-af8d-ccd5703d116e","resourceVersion":"1728","creationTimestamp":"2024-03-14T19:19:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0314 19:44:55.833688    8428 request.go:629] Waited for 196.5699ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:55.833926    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:55.834130    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:55.834130    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:55.834130    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:55.838166    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:55.838166    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:55.838917    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:55.838917    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:55.838917    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:56 GMT
	I0314 19:44:55.838917    8428 round_trippers.go:580]     Audit-Id: 8fb43a63-d3d5-4e28-ba6d-f92a65d17b86
	I0314 19:44:55.838917    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:55.838917    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:55.839435    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:44:55.839974    8428 pod_ready.go:92] pod "kube-proxy-cg28g" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:55.840012    8428 pod_ready.go:81] duration metric: took 407.0501ms for pod "kube-proxy-cg28g" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:55.840048    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2qls" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:56.037784    8428 request.go:629] Waited for 197.6423ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2qls
	I0314 19:44:56.038143    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2qls
	I0314 19:44:56.038143    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:56.038223    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:56.038270    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:56.042145    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:56.042145    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:56.042717    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:56.042717    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:56.042717    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:56.042717    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:56 GMT
	I0314 19:44:56.042717    8428 round_trippers.go:580]     Audit-Id: 940a2641-4309-4676-98c0-2d3de3e95f4a
	I0314 19:44:56.042776    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:56.042911    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w2qls","generateName":"kube-proxy-","namespace":"kube-system","uid":"7a53e602-282e-4b63-a993-a5d23d3c615f","resourceVersion":"1678","creationTimestamp":"2024-03-14T19:26:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6fc4cc4b-ef3f-4f16-8df5-a146058b364e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:26:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6fc4cc4b-ef3f-4f16-8df5-a146058b364e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0314 19:44:56.240539    8428 request.go:629] Waited for 196.7889ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m03
	I0314 19:44:56.240539    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000-m03
	I0314 19:44:56.240539    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:56.240539    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:56.240539    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:56.244496    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:56.244496    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:56.244496    8428 round_trippers.go:580]     Audit-Id: 3c75c8cd-e522-4f5a-ae2a-d1d4550ee94d
	I0314 19:44:56.244496    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:56.244496    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:56.244496    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:56.244496    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:56.244496    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:56 GMT
	I0314 19:44:56.245210    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000-m03","uid":"1b8e342b-6e96-49e8-a22c-874445d29fe3","resourceVersion":"1846","creationTimestamp":"2024-03-14T19:36:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_14T19_36_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:36:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0314 19:44:56.245592    8428 pod_ready.go:97] node "multinode-442000-m03" hosting pod "kube-proxy-w2qls" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m03" has status "Ready":"Unknown"
	I0314 19:44:56.245592    8428 pod_ready.go:81] duration metric: took 405.5144ms for pod "kube-proxy-w2qls" in "kube-system" namespace to be "Ready" ...
	E0314 19:44:56.245592    8428 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-442000-m03" hosting pod "kube-proxy-w2qls" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-442000-m03" has status "Ready":"Unknown"
	I0314 19:44:56.245592    8428 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:56.426970    8428 request.go:629] Waited for 180.8045ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:44:56.427304    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-442000
	I0314 19:44:56.427304    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:56.427304    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:56.427304    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:56.431085    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:56.431085    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:56.431085    8428 round_trippers.go:580]     Audit-Id: 2f795128-3064-4963-ab46-652c719623a5
	I0314 19:44:56.431085    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:56.431085    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:56.431085    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:56.431085    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:56.431085    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:56 GMT
	I0314 19:44:56.432232    8428 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-442000","namespace":"kube-system","uid":"76b10598-fe0d-4a14-a8e4-a32221fbb68f","resourceVersion":"1803","creationTimestamp":"2024-03-14T19:19:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.mirror":"2b2434280023596d1e3c90125a7219ed","kubernetes.io/config.seen":"2024-03-14T19:18:55.420206709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-14T19:19:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0314 19:44:56.632348    8428 request.go:629] Waited for 199.4266ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:56.632439    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes/multinode-442000
	I0314 19:44:56.632534    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:56.632534    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:56.632597    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:56.636810    8428 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 19:44:56.636810    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:56.636810    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:56.636810    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:56.636810    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:56.636810    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:56.636810    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:56 GMT
	I0314 19:44:56.636921    8428 round_trippers.go:580]     Audit-Id: b75dbdaf-e647-4f65-be35-b54e988a9d92
	I0314 19:44:56.637315    8428 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-14T19:19:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0314 19:44:56.637315    8428 pod_ready.go:92] pod "kube-scheduler-multinode-442000" in "kube-system" namespace has status "Ready":"True"
	I0314 19:44:56.637315    8428 pod_ready.go:81] duration metric: took 391.6935ms for pod "kube-scheduler-multinode-442000" in "kube-system" namespace to be "Ready" ...
	I0314 19:44:56.637315    8428 pod_ready.go:38] duration metric: took 1.6050924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:44:56.637858    8428 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:44:56.647302    8428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:44:56.670358    8428 system_svc.go:56] duration metric: took 32.4974ms WaitForService to wait for kubelet
	I0314 19:44:56.670421    8428 kubeadm.go:576] duration metric: took 20.4376182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:44:56.670421    8428 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:44:56.833907    8428 request.go:629] Waited for 163.0453ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.93.236:8443/api/v1/nodes
	I0314 19:44:56.833907    8428 round_trippers.go:463] GET https://172.17.93.236:8443/api/v1/nodes
	I0314 19:44:56.833907    8428 round_trippers.go:469] Request Headers:
	I0314 19:44:56.833907    8428 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0314 19:44:56.833907    8428 round_trippers.go:473]     Accept: application/json, */*
	I0314 19:44:56.837622    8428 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 19:44:56.837622    8428 round_trippers.go:577] Response Headers:
	I0314 19:44:56.837622    8428 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d73a2cc-b6cb-4d64-b0e7-7be039f8f01d
	I0314 19:44:56.837622    8428 round_trippers.go:580]     Date: Thu, 14 Mar 2024 19:44:57 GMT
	I0314 19:44:56.837622    8428 round_trippers.go:580]     Audit-Id: 21e3f0e9-8317-4b72-b74e-ec4cc60bd3b2
	I0314 19:44:56.837622    8428 round_trippers.go:580]     Cache-Control: no-cache, private
	I0314 19:44:56.837622    8428 round_trippers.go:580]     Content-Type: application/json
	I0314 19:44:56.837622    8428 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 53597728-0681-463c-bae7-662e735d3928
	I0314 19:44:56.838815    8428 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2114"},"items":[{"metadata":{"name":"multinode-442000","uid":"83d7b6d9-6c9d-412a-8666-79be48276e86","resourceVersion":"1868","creationTimestamp":"2024-03-14T19:19:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-442000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c6f78a3db54ac629870afb44fb5bc8be9e04a8c7","minikube.k8s.io/name":"multinode-442000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_14T19_19_05_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15606 chars]
	I0314 19:44:56.839497    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:44:56.839497    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:44:56.839497    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:44:56.839497    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:44:56.839497    8428 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:44:56.839497    8428 node_conditions.go:123] node cpu capacity is 2
	I0314 19:44:56.839497    8428 node_conditions.go:105] duration metric: took 169.0636ms to run NodePressure ...
	I0314 19:44:56.839497    8428 start.go:240] waiting for startup goroutines ...
	I0314 19:44:56.839497    8428 start.go:254] writing updated cluster config ...
	I0314 19:44:56.843730    8428 out.go:177] 
	I0314 19:44:56.846646    8428 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:44:56.857353    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:44:56.857353    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:44:56.863225    8428 out.go:177] * Starting "multinode-442000-m03" worker node in "multinode-442000" cluster
	I0314 19:44:56.865317    8428 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:44:56.865317    8428 cache.go:56] Caching tarball of preloaded images
	I0314 19:44:56.865713    8428 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 19:44:56.865713    8428 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 19:44:56.865713    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:44:56.873278    8428 start.go:360] acquireMachinesLock for multinode-442000-m03: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:44:56.873278    8428 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-442000-m03"
	I0314 19:44:56.873278    8428 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:44:56.873278    8428 fix.go:54] fixHost starting: m03
	I0314 19:44:56.874011    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:44:58.832982    8428 main.go:141] libmachine: [stdout =====>] : Off
	
	I0314 19:44:58.832982    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:44:58.832982    8428 fix.go:112] recreateIfNeeded on multinode-442000-m03: state=Stopped err=<nil>
	W0314 19:44:58.833998    8428 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:44:58.848474    8428 out.go:177] * Restarting existing hyperv VM for "multinode-442000-m03" ...
	I0314 19:44:58.853649    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-442000-m03
	I0314 19:45:01.109749    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:45:01.109749    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:01.109749    8428 main.go:141] libmachine: Waiting for host to start...
	I0314 19:45:01.109749    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:03.199728    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:03.199728    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:03.199728    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:05.518024    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:45:05.518024    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:06.531932    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:08.542630    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:08.542630    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:08.542720    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:10.857335    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:45:10.857335    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:11.868282    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:13.861311    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:13.861828    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:13.861966    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:16.155080    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:45:16.155080    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:17.165288    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:19.169146    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:19.169146    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:19.169461    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:21.484034    8428 main.go:141] libmachine: [stdout =====>] : 
	I0314 19:45:21.484866    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:22.491632    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:24.563128    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:24.563128    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:24.563128    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:26.923984    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:26.923984    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:26.926928    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:28.891146    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:28.891828    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:28.891828    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:31.267766    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:31.268252    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:31.268252    8428 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-442000\config.json ...
	I0314 19:45:31.270525    8428 machine.go:94] provisionDockerMachine start ...
	I0314 19:45:31.270681    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:33.252026    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:33.252026    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:33.252275    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:35.594673    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:35.594673    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:35.598570    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:45:35.598656    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.252 22 <nil> <nil>}
	I0314 19:45:35.598656    8428 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:45:35.729082    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:45:35.729082    8428 buildroot.go:166] provisioning hostname "multinode-442000-m03"
	I0314 19:45:35.729185    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:37.700219    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:37.700219    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:37.700775    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:40.070258    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:40.070780    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:40.074459    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:45:40.074982    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.252 22 <nil> <nil>}
	I0314 19:45:40.074982    8428 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-442000-m03 && echo "multinode-442000-m03" | sudo tee /etc/hostname
	I0314 19:45:40.234976    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-442000-m03
	
	I0314 19:45:40.234976    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:42.242937    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:42.243078    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:42.243078    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:44.611732    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:44.611732    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:44.615670    8428 main.go:141] libmachine: Using SSH client type: native
	I0314 19:45:44.616085    8428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac9f80] 0xaccb60 <nil>  [] 0s} 172.17.91.252 22 <nil> <nil>}
	I0314 19:45:44.616085    8428 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-442000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-442000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-442000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:45:44.769043    8428 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:45:44.769043    8428 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0314 19:45:44.769043    8428 buildroot.go:174] setting up certificates
	I0314 19:45:44.769043    8428 provision.go:84] configureAuth start
	I0314 19:45:44.769043    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:46.726827    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:46.726827    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:46.726827    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:49.050646    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:49.050646    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:49.051003    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:51.016284    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:51.016284    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:51.016369    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:53.398590    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:53.398590    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:53.398650    8428 provision.go:143] copyHostCerts
	I0314 19:45:53.398766    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0314 19:45:53.398994    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0314 19:45:53.398994    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0314 19:45:53.399553    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0314 19:45:53.400200    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0314 19:45:53.400200    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0314 19:45:53.400200    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0314 19:45:53.400746    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0314 19:45:53.401030    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0314 19:45:53.401728    8428 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0314 19:45:53.401728    8428 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0314 19:45:53.401989    8428 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0314 19:45:53.402692    8428 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-442000-m03 san=[127.0.0.1 172.17.91.252 localhost minikube multinode-442000-m03]
	I0314 19:45:53.975510    8428 provision.go:177] copyRemoteCerts
	I0314 19:45:53.985591    8428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:45:53.985662    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:45:55.944611    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:45:55.945412    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:55.945531    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 19:45:58.312194    8428 main.go:141] libmachine: [stdout =====>] : 172.17.91.252
	
	I0314 19:45:58.312890    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:45:58.313257    8428 sshutil.go:53] new ssh client: &{IP:172.17.91.252 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-442000-m03\id_rsa Username:docker}
	I0314 19:45:58.422676    8428 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4366857s)
	I0314 19:45:58.422676    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0314 19:45:58.422676    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:45:58.464670    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0314 19:45:58.464670    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0314 19:45:58.514330    8428 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0314 19:45:58.514587    8428 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:45:58.556708    8428 provision.go:87] duration metric: took 13.7866442s to configureAuth
	I0314 19:45:58.556708    8428 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:45:58.557328    8428 config.go:182] Loaded profile config "multinode-442000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:45:58.557328    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-442000-m03 ).state
	I0314 19:46:00.542326    8428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 19:46:00.542326    8428 main.go:141] libmachine: [stderr =====>] : 
	I0314 19:46:00.543122    8428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
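	
	For reference, the two queries the Hyper-V driver keeps issuing above can be reproduced by hand from an elevated PowerShell prompt on the host; this is just the same pair of calls the log shows, not an extra step in the test:
	
	  ( Hyper-V\Get-VM multinode-442000-m03 ).state
	  (( Hyper-V\Get-VM multinode-442000-m03 ).networkadapters[0]).ipaddresses[0]
	
	Each round trip costs roughly two seconds in the timestamps above, which is where most of the provisioning wall-clock time goes.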
	
	
	==> Docker <==
	Mar 14 19:42:14 multinode-442000 dockerd[1043]: 2024/03/14 19:42:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:17 multinode-442000 dockerd[1043]: 2024/03/14 19:42:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:18 multinode-442000 dockerd[1043]: 2024/03/14 19:42:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:21 multinode-442000 dockerd[1043]: 2024/03/14 19:42:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:22 multinode-442000 dockerd[1043]: 2024/03/14 19:42:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:22 multinode-442000 dockerd[1043]: 2024/03/14 19:42:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:22 multinode-442000 dockerd[1043]: 2024/03/14 19:42:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:22 multinode-442000 dockerd[1043]: 2024/03/14 19:42:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:22 multinode-442000 dockerd[1043]: 2024/03/14 19:42:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 14 19:42:22 multinode-442000 dockerd[1043]: 2024/03/14 19:42:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
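	
	The Docker block above is the dockerd systemd journal captured from inside the control-plane VM. Assuming the profile is still up, roughly the same output can be pulled manually through the ssh subcommand (a sketch, not part of the test run):
	
	  out/minikube-windows-amd64.exe -p multinode-442000 ssh -- sudo journalctl -u docker --no-pager
	
	The repeated otelhttp "superfluous response.WriteHeader" lines are noise from dockerd's HTTP tracing middleware; they indicate a double-written response header, not a daemon failure.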
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b159aedddf94a       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   1                   89f326046d00d       coredns-5dd5756b68-d22jc
	813492ad2d666       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   cddebe360bf3a       busybox-5b5d89c9d6-7446n
	3167caea2534f       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   a723f141543f2       storage-provisioner
	999e4c168afef       4950bb10b3f87                                                                                         5 minutes ago       Running             kindnet-cni               1                   a9176b5544663       kindnet-7b9lf
	497007582e446       83f6cc407eed8                                                                                         5 minutes ago       Running             kube-proxy                1                   f513a7aff6720       kube-proxy-cg28g
	2876622a2618d       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   a723f141543f2       storage-provisioner
	32d90a3ea2131       e3db313c6dbc0                                                                                         5 minutes ago       Running             kube-scheduler            1                   c70744e60ac50       kube-scheduler-multinode-442000
	a598d24960de8       7fe0e6f37db33                                                                                         5 minutes ago       Running             kube-apiserver            0                   a27fa2188ee4c       kube-apiserver-multinode-442000
	12baf105f0bb2       d058aa5ab969c                                                                                         5 minutes ago       Running             kube-controller-manager   1                   67475bf80ddd9       kube-controller-manager-multinode-442000
	a81a9c43c3552       73deb9a3f7025                                                                                         5 minutes ago       Running             etcd                      0                   35dd339c8a08d       etcd-multinode-442000
	0cd43cdaa31c9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   fa0f2372c88ee       busybox-5b5d89c9d6-7446n
	8899bc0038935       ead0a4a53df89                                                                                         27 minutes ago      Exited              coredns                   0                   a3dba3fc54c01       coredns-5dd5756b68-d22jc
	1a321c0e89971       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago      Exited              kindnet-cni               0                   b046b896affe9       kindnet-7b9lf
	2a62baf3f1b46       83f6cc407eed8                                                                                         27 minutes ago      Exited              kube-proxy                0                   9b3244b47278e       kube-proxy-cg28g
	dbb603289bf16       e3db313c6dbc0                                                                                         27 minutes ago      Exited              kube-scheduler            0                   54e39762d7a64       kube-scheduler-multinode-442000
	16b80f73683dc       d058aa5ab969c                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   102c907609a3a       kube-controller-manager-multinode-442000
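	
	This table is CRI-level container state, so each restarted workload appears twice: the Exited row is the pre-restart container (ATTEMPT 0) and the Running row its replacement (ATTEMPT 1). Given the cri-dockerd socket shown in the node annotations below, a comparable listing could be taken by hand with:
	
	  out/minikube-windows-amd64.exe -p multinode-442000 ssh -- sudo crictl ps -a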
	
	
	==> coredns [8899bc003893] <==
	[INFO] 10.244.1.2:46248 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00024762s
	[INFO] 10.244.1.2:46501 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100408s
	[INFO] 10.244.1.2:52414 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056704s
	[INFO] 10.244.1.2:44908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000121409s
	[INFO] 10.244.1.2:49578 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011941s
	[INFO] 10.244.1.2:51057 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060205s
	[INFO] 10.244.1.2:56240 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055805s
	[INFO] 10.244.0.3:32901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172914s
	[INFO] 10.244.0.3:41115 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149912s
	[INFO] 10.244.0.3:40494 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013161s
	[INFO] 10.244.0.3:40575 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077106s
	[INFO] 10.244.1.2:55307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194115s
	[INFO] 10.244.1.2:46435 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00025832s
	[INFO] 10.244.1.2:52095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156813s
	[INFO] 10.244.1.2:57849 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012701s
	[INFO] 10.244.0.3:47270 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244119s
	[INFO] 10.244.0.3:59009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000411532s
	[INFO] 10.244.0.3:40925 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108108s
	[INFO] 10.244.0.3:56417 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000067706s
	[INFO] 10.244.1.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108409s
	[INFO] 10.244.1.2:38949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118209s
	[INFO] 10.244.1.2:56933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156413s
	[INFO] 10.244.1.2:35971 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000072406s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b159aedddf94] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38965 - 37747 "HINFO IN 9162400456686827331.1281991328183180689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052220616s
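	
	The two coredns blocks are the same deployment across the restart: the old instance [8899bc003893] serves cluster DNS until it receives SIGTERM, and the new one [b159aedddf94] comes up clean on CoreDNS-1.10.1. To follow the live instance directly, one option (pod name taken from the container status table above):
	
	  kubectl --context multinode-442000 -n kube-system logs -f coredns-5dd5756b68-d22jc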
	
	
	==> describe nodes <==
	Name:               multinode-442000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-442000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-442000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T19_19_05_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:19:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-442000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:46:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:41:41 +0000   Thu, 14 Mar 2024 19:41:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.93.236
	  Hostname:    multinode-442000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 37c811f81f1d4d709fd4a6eb79d70749
	  System UUID:                8469b663-ea90-da4f-856d-11034a8f65d8
	  Boot ID:                    91589624-f8f3-469e-b556-aa6dd64e54de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-7446n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-d22jc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-442000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m29s
	  kube-system                 kindnet-7b9lf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-442000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-controller-manager-multinode-442000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-cg28g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-442000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	  Normal  NodeReady                27m                    kubelet          Node multinode-442000 status is now: NodeReady
	  Normal  Starting                 5m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m35s (x8 over 5m35s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m35s (x8 over 5m35s)  kubelet          Node multinode-442000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m35s (x7 over 5m35s)  kubelet          Node multinode-442000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m17s                  node-controller  Node multinode-442000 event: Registered Node multinode-442000 in Controller
	
	
	Name:               multinode-442000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-442000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-442000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T19_44_36_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:44:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-442000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:46:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:44:54 +0000   Thu, 14 Mar 2024 19:44:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:44:54 +0000   Thu, 14 Mar 2024 19:44:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:44:54 +0000   Thu, 14 Mar 2024 19:44:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:44:54 +0000   Thu, 14 Mar 2024 19:44:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.93.200
	  Hostname:    multinode-442000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 3437d2bd3dc44e44bed144354415183d
	  System UUID:                0b9b8376-0767-f940-9973-d373e3dc050d
	  Boot ID:                    6d6b7df3-9b26-4626-95eb-6743d6697099
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-rsgh2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kindnet-c7m4p               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-72dzs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  Starting                 106s               kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x5 over 24m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x5 over 24m)  kubelet          Node multinode-442000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x5 over 24m)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                24m                kubelet          Node multinode-442000-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m (x5 over 2m1s)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x5 over 2m1s)  kubelet          Node multinode-442000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x5 over 2m1s)  kubelet          Node multinode-442000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           116s               node-controller  Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller
	  Normal  NodeReady                101s               kubelet          Node multinode-442000-m02 status is now: NodeReady
	
	
	Name:               multinode-442000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-442000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-442000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T19_36_47_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:36:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-442000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:37:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 14 Mar 2024 19:36:54 +0000   Thu, 14 Mar 2024 19:38:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.84.215
	  Hostname:    multinode-442000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc7772516bfe448db22a5c28796f53ab
	  System UUID:                71573585-d564-f043-9154-3d5854ce61b8
	  Boot ID:                    fed746b2-110b-43ee-9065-09983ba74a37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-r7zdb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-proxy-w2qls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 20m                    kube-proxy       
	  Normal  Starting                 9m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  20m (x5 over 20m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x5 over 20m)      kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x5 over 20m)      kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                    kubelet          Node multinode-442000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  9m48s (x5 over 9m50s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m48s (x5 over 9m50s)  kubelet          Node multinode-442000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m48s (x5 over 9m50s)  kubelet          Node multinode-442000-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m44s                  node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
	  Normal  NodeReady                9m41s                  kubelet          Node multinode-442000-m03 status is now: NodeReady
	  Normal  NodeNotReady             8m14s                  node-controller  Node multinode-442000-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           5m17s                  node-controller  Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller
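	
	Of the three nodes, multinode-442000-m03 is the notable one: every condition has been Unknown since 19:38:21, it carries both node.kubernetes.io/unreachable taints, and its recorded InternalIP (172.17.84.215) no longer matches the address the Hyper-V driver resolved for the VM at the top of this log (172.17.91.252), which is consistent with the machine having been restarted without the kubelet re-registering yet. To watch for the node coming back, for example:
	
	  kubectl --context multinode-442000 get nodes -o wide
	  kubectl --context multinode-442000 describe node multinode-442000-m03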
	
	
	==> dmesg <==
	[  +0.017569] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +5.774438] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.663188] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.473946] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +5.849126] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar14 19:40] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.179743] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[ +24.853688] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	[  +0.096946] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.497369] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +0.185545] systemd-fstab-generator[1021]: Ignoring "noauto" option for root device
	[  +0.215423] systemd-fstab-generator[1035]: Ignoring "noauto" option for root device
	[  +2.887443] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.193519] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.182072] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +0.258988] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	[  +0.819687] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +0.099817] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.940519] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	[Mar14 19:41] kauditd_printk_skb: 84 callbacks suppressed
	[  +4.042735] systemd-fstab-generator[3087]: Ignoring "noauto" option for root device
	[  +7.733278] kauditd_printk_skb: 70 callbacks suppressed
	
	
	==> etcd [a81a9c43c355] <==
	{"level":"info","ts":"2024-03-14T19:41:02.154977Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T19:41:02.154992Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T19:41:02.158559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 switched to configuration voters=(18025278095570267193)"}
	{"level":"info","ts":"2024-03-14T19:41:02.158756Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","added-peer-id":"fa26a6ed08186c39","added-peer-peer-urls":["https://172.17.86.124:2380"]}
	{"level":"info","ts":"2024-03-14T19:41:02.158933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"76b99849a2fc5549","local-member-id":"fa26a6ed08186c39","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:41:02.158969Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:41:02.159838Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T19:41:02.160148Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"fa26a6ed08186c39","initial-advertise-peer-urls":["https://172.17.93.236:2380"],"listen-peer-urls":["https://172.17.93.236:2380"],"advertise-client-urls":["https://172.17.93.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.93.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T19:41:02.160272Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T19:41:02.161335Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.93.236:2380"}
	{"level":"info","ts":"2024-03-14T19:41:02.161389Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.93.236:2380"}
	{"level":"info","ts":"2024-03-14T19:41:03.281331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-14T19:41:03.281645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-14T19:41:03.281829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgPreVoteResp from fa26a6ed08186c39 at term 2"}
	{"level":"info","ts":"2024-03-14T19:41:03.281928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became candidate at term 3"}
	{"level":"info","ts":"2024-03-14T19:41:03.282044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 received MsgVoteResp from fa26a6ed08186c39 at term 3"}
	{"level":"info","ts":"2024-03-14T19:41:03.282164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa26a6ed08186c39 became leader at term 3"}
	{"level":"info","ts":"2024-03-14T19:41:03.282332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fa26a6ed08186c39 elected leader fa26a6ed08186c39 at term 3"}
	{"level":"info","ts":"2024-03-14T19:41:03.292472Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fa26a6ed08186c39","local-member-attributes":"{Name:multinode-442000 ClientURLs:[https://172.17.93.236:2379]}","request-path":"/0/members/fa26a6ed08186c39/attributes","cluster-id":"76b99849a2fc5549","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T19:41:03.292867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:41:03.296522Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T19:41:03.298446Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:41:03.311867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.93.236:2379"}
	{"level":"info","ts":"2024-03-14T19:41:03.311957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T19:41:03.31205Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:46:35 up 7 min,  0 users,  load average: 0.50, 0.39, 0.20
	Linux multinode-442000 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1a321c0e8997] <==
	I0314 19:37:57.176659       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:38:07.189890       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:38:07.189993       1 main.go:227] handling current node
	I0314 19:38:07.190008       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:38:07.190016       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:38:07.190217       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:38:07.190245       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:38:17.196541       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:38:17.196633       1 main.go:227] handling current node
	I0314 19:38:17.196647       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:38:17.196655       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:38:17.196888       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:38:17.197012       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:38:27.217365       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:38:27.217460       1 main.go:227] handling current node
	I0314 19:38:27.217475       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:38:27.217483       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:38:27.217621       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:38:27.217634       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:38:37.229941       1 main.go:223] Handling node with IPs: map[172.17.86.124:{}]
	I0314 19:38:37.230048       1 main.go:227] handling current node
	I0314 19:38:37.230062       1 main.go:223] Handling node with IPs: map[172.17.80.135:{}]
	I0314 19:38:37.230070       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:38:37.230268       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:38:37.230338       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [999e4c168afe] <==
	I0314 19:45:49.057762       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:45:59.069539       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:45:59.069641       1 main.go:227] handling current node
	I0314 19:45:59.069656       1 main.go:223] Handling node with IPs: map[172.17.93.200:{}]
	I0314 19:45:59.069665       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:45:59.069833       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:45:59.069939       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:46:09.083307       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:46:09.083973       1 main.go:227] handling current node
	I0314 19:46:09.084051       1 main.go:223] Handling node with IPs: map[172.17.93.200:{}]
	I0314 19:46:09.084064       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:46:09.084385       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:46:09.084432       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:46:19.100459       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:46:19.100542       1 main.go:227] handling current node
	I0314 19:46:19.100555       1 main.go:223] Handling node with IPs: map[172.17.93.200:{}]
	I0314 19:46:19.100563       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:46:19.101134       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:46:19.101225       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
	I0314 19:46:29.116082       1 main.go:223] Handling node with IPs: map[172.17.93.236:{}]
	I0314 19:46:29.116176       1 main.go:227] handling current node
	I0314 19:46:29.116190       1 main.go:223] Handling node with IPs: map[172.17.93.200:{}]
	I0314 19:46:29.116198       1 main.go:250] Node multinode-442000-m02 has CIDR [10.244.1.0/24] 
	I0314 19:46:29.116478       1 main.go:223] Handling node with IPs: map[172.17.84.215:{}]
	I0314 19:46:29.116501       1 main.go:250] Node multinode-442000-m03 has CIDR [10.244.3.0/24] 
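	
	Comparing the two kindnet blocks shows the CNI re-learning the cluster after the restart: m02 moves from 172.17.80.135 to 172.17.93.200 while m03 is still advertised at its stale 172.17.84.215, and each 10-second sync programs routes to the peers' pod CIDRs via those node IPs. The result can be checked on the node with, for example:
	
	  out/minikube-windows-amd64.exe -p multinode-442000 ssh -- ip route show
	
	which should include 10.244.1.0/24 via 172.17.93.200 and 10.244.3.0/24 via 172.17.84.215.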
	
	
	==> kube-apiserver [a598d24960de] <==
	I0314 19:41:05.730411       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0314 19:41:05.730521       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0314 19:41:05.730616       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0314 19:41:05.799477       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 19:41:05.813580       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 19:41:05.830168       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 19:41:05.830217       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 19:41:05.830281       1 aggregator.go:166] initial CRD sync complete...
	I0314 19:41:05.830289       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 19:41:05.830295       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 19:41:05.830301       1 cache.go:39] Caches are synced for autoregister controller
	I0314 19:41:05.846941       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 19:41:05.857057       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 19:41:05.858966       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 19:41:05.865554       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 19:41:05.865721       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 19:41:06.667315       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0314 19:41:07.118314       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.17.93.236]
	I0314 19:41:07.120612       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 19:41:07.135973       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 19:41:09.049225       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 19:41:09.264220       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 19:41:09.277110       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 19:41:09.393446       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 19:41:09.422214       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
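	
	The restarted apiserver syncs all caches, resets the kubernetes endpoint to the new control-plane IP (172.17.93.236), and then only registers quota admission evaluators, i.e. a clean startup with no errors. Readiness can be double-checked against the standard health endpoints, e.g.:
	
	  kubectl --context multinode-442000 get --raw='/readyz?verbose'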
	
	
	==> kube-controller-manager [12baf105f0bb] <==
	I0314 19:42:12.281325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="115.209µs"
	I0314 19:42:12.305037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.006µs"
	I0314 19:42:12.366507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.074928ms"
	I0314 19:42:12.368560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.408µs"
	I0314 19:44:20.000326       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-rsgh2"
	I0314 19:44:20.018088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.965334ms"
	I0314 19:44:20.018437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="286.922µs"
	I0314 19:44:20.064614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44.71816ms"
	I0314 19:44:20.064918       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="202.215µs"
	I0314 19:44:29.153217       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-442000-m02 event: Removing Node multinode-442000-m02 from Controller"
	I0314 19:44:35.834326       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m02\" does not exist"
	I0314 19:44:35.838097       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8drpb" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-8drpb"
	I0314 19:44:35.854765       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m02" podCIDRs=["10.244.1.0/24"]
	I0314 19:44:36.273218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="77.206µs"
	I0314 19:44:39.155274       1 event.go:307] "Event occurred" object="multinode-442000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m02 event: Registered Node multinode-442000-m02 in Controller"
	I0314 19:44:55.017615       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:44:55.064439       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="86.907µs"
	I0314 19:44:59.182285       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8drpb" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-8drpb"
	I0314 19:45:02.356478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="85.906µs"
	I0314 19:45:02.839310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="116.509µs"
	I0314 19:45:02.848171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.005µs"
	I0314 19:45:04.134059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="79.607µs"
	I0314 19:45:04.153492       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44.903µs"
	I0314 19:45:05.910783       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="25.847493ms"
	I0314 19:45:05.911308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="114.309µs"
	
	
	==> kube-controller-manager [16b80f73683d] <==
	I0314 19:22:48.344640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.018521ms"
	I0314 19:22:48.344838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.804µs"
	I0314 19:26:25.208780       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:26:25.214591       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:26:25.248082       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.2.0/24"]
	I0314 19:26:25.265233       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-r7zdb"
	I0314 19:26:25.273144       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w2qls"
	I0314 19:26:26.207170       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-442000-m03"
	I0314 19:26:26.207236       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:26:43.758846       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:33:46.333556       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:33:46.333891       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:33:46.348976       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:33:46.370200       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:36:39.868492       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:36:41.400896       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-442000-m03 event: Removing Node multinode-442000-m03 from Controller"
	I0314 19:36:47.335802       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-442000-m03\" does not exist"
	I0314 19:36:47.336128       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:36:47.352987       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-442000-m03" podCIDRs=["10.244.3.0/24"]
	I0314 19:36:51.403261       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-442000-m03 event: Registered Node multinode-442000-m03 in Controller"
	I0314 19:36:54.976864       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:38:21.463528       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-442000-m02"
	I0314 19:38:21.463818       1 event.go:307] "Event occurred" object="multinode-442000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-442000-m03 status is now: NodeNotReady"
	I0314 19:38:21.486796       1 event.go:307] "Event occurred" object="kube-system/kindnet-r7zdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 19:38:21.501217       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-w2qls" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
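The controller-manager block above shows node multinode-442000-m03 flapping between RegisteredNode and NodeNotReady across the restart. A standard way to watch that state from the host is plain kubectl against this profile's context; nothing below is specific to the failing run:

	# Overall readiness of the profile's nodes
	kubectl --context multinode-442000 get nodes -o wide
	# Conditions and recent events for the flapping worker
	kubectl --context multinode-442000 describe node multinode-442000-m03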
	
	
	==> kube-proxy [2a62baf3f1b4] <==
	I0314 19:19:18.247796       1 server_others.go:69] "Using iptables proxy"
	I0314 19:19:18.275162       1 node.go:141] Successfully retrieved node IP: 172.17.86.124
	I0314 19:19:18.379821       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:19:18.379851       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:19:18.395429       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:19:18.395506       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:19:18.395856       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:19:18.395890       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:19:18.417861       1 config.go:188] "Starting service config controller"
	I0314 19:19:18.417913       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:19:18.417950       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:19:18.420511       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:19:18.426566       1 config.go:315] "Starting node config controller"
	I0314 19:19:18.426600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:19:18.519508       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:19:18.524347       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:19:18.527360       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [497007582e44] <==
	I0314 19:41:08.342277       1 server_others.go:69] "Using iptables proxy"
	I0314 19:41:08.381589       1 node.go:141] Successfully retrieved node IP: 172.17.93.236
	I0314 19:41:08.703360       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:41:08.703384       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:41:08.724122       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:41:08.726554       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:41:08.729424       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:41:08.729460       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:41:08.732062       1 config.go:188] "Starting service config controller"
	I0314 19:41:08.732501       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:41:08.732571       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:41:08.732581       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:41:08.733523       1 config.go:315] "Starting node config controller"
	I0314 19:41:08.733550       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:41:08.832968       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:41:08.833049       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:41:08.835501       1 shared_informer.go:318] Caches are synced for node config
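Both kube-proxy instances log the same route_localnet notice, and the message itself names the two available knobs. A minimal sketch of applying them on the kube-proxy command line; the flag names come straight from the log, while the values are illustrative assumptions rather than settings from this run:

	# Stop exposing node-ports on 127.0.0.1 altogether
	kube-proxy --iptables-localhost-nodeports=false
	# ...or restrict node-port listeners to non-loopback addresses
	kube-proxy --nodeport-addresses=172.17.0.0/16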
	
	
	==> kube-scheduler [32d90a3ea213] <==
	I0314 19:41:03.376319       1 serving.go:348] Generated self-signed cert in-memory
	W0314 19:41:05.770317       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 19:41:05.770426       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 19:41:05.770581       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 19:41:05.770640       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:41:05.841573       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 19:41:05.841674       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:41:05.844125       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:41:05.845062       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:41:05.845143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:41:05.845293       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:41:05.946840       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
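The requestheader_controller warning above includes its own suggested remedy. Spelled out with hypothetical placeholders (the rolebinding name and service account below are illustrative, not objects from this cluster), it would look like:

	kubectl create rolebinding extension-apiserver-authn-reader \
	  -n kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=kube-system:my-scheduler-sa   # hypothetical SA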
	
	
	==> kube-scheduler [dbb603289bf1] <==
	E0314 19:19:01.454398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 19:19:01.505982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 19:19:01.506182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 19:19:01.640521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 19:19:01.640836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0314 19:19:01.681052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 19:19:01.681953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 19:19:01.732243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 19:19:01.732288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 19:19:01.767241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 19:19:01.767329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 19:19:01.783665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 19:19:01.783845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0314 19:19:01.812936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 19:19:01.813027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 19:19:01.821109       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 19:19:01.821267       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 19:19:01.843311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 19:19:01.843339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 19:19:01.914649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 19:19:01.914986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0314 19:19:04.090863       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:38:43.236637       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0314 19:38:43.237145       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0314 19:38:43.237439       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 14 19:42:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:42:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:42:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.167906    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f326046d00d990fbe8611867f6438ef498caad91d78b4f265633a7cd56307f"
	Mar 14 19:42:11 multinode-442000 kubelet[1523]: I0314 19:42:11.214897    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddebe360bf3a58d057146523ff9f043ddb40843d3e55a24f8f364524780a439"
	Mar 14 19:43:00 multinode-442000 kubelet[1523]: E0314 19:43:00.513457    1523 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:43:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:43:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:43:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:43:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:44:00 multinode-442000 kubelet[1523]: E0314 19:44:00.518856    1523 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:44:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:44:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:44:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:44:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:45:00 multinode-442000 kubelet[1523]: E0314 19:45:00.513671    1523 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:45:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:45:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:45:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:45:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:46:00 multinode-442000 kubelet[1523]: E0314 19:46:00.514164    1523 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:46:00 multinode-442000 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:46:00 multinode-442000 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:46:00 multinode-442000 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:46:00 multinode-442000 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
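The hourly canary failure above is kubelet probing the IPv6 nat table, which this guest kernel does not provide; the cluster runs single-stack IPv4 (see the kube-proxy logs), so this is noise rather than the cause of the failure. A rough way to confirm the missing table from inside the VM, assuming the usual minikube guest tooling and following the error's own insmod hint:

	out/minikube-windows-amd64.exe ssh -p multinode-442000
	# Inside the guest: is the IPv6 nat table present at all?
	sudo ip6tables -t nat -L -n
	# If not, try loading the corresponding module
	sudo modprobe ip6table_nat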
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 19:46:23.981760    9528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-442000 -n multinode-442000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-442000 -n multinode-442000: (11.1275729s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-442000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (565.82s)
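Every minikube invocation in this run also prints the Docker CLI context warning seen in the stderr block above; it stems from a missing context metadata file and is unrelated to this failure. A quick way to inspect or reset the context on the Jenkins host would be the standard docker CLI:

	docker context ls            # list contexts and mark the current one
	docker context use default   # re-select the built-in default context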

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (302.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-956500 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-956500 --driver=hyperv: exit status 1 (4m59.7212426s)

                                                
                                                
-- stdout --
	* [NoKubernetes-956500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-956500" primary control-plane node in "NoKubernetes-956500" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 20:02:58.577973   12608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-956500 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-956500 -n NoKubernetes-956500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-956500 -n NoKubernetes-956500: exit status 7 (2.3375529s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 20:07:58.314243   11828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-956500" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (302.06s)
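The start here spent the full timeout inside "Creating hyperv VM" and the follow-up status probe reported Stopped. A plausible first triage on the Windows host, assuming the stock Hyper-V PowerShell module is available (the cmdlet is standard; the profile name mirrors this run):

	# Was the VM actually created, and in what state is it?
	Get-VM -Name NoKubernetes-956500
	# Remove the half-created profile before retrying
	out/minikube-windows-amd64.exe delete -p NoKubernetes-956500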

                                                
                                    

Test pass (172/217)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.92
4 TestDownloadOnly/v1.20.0/preload-exists 0.06
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 1.08
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.22
12 TestDownloadOnly/v1.28.4/json-events 12.67
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.43
18 TestDownloadOnly/v1.28.4/DeleteAll 1.24
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 1.05
21 TestDownloadOnly/v1.29.0-rc.2/json-events 19.52
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.25
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 1.18
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 1.08
30 TestBinaryMirror 6.64
31 TestOffline 253.98
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.27
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.25
36 TestAddons/Setup 377.27
39 TestAddons/parallel/Ingress 61.19
40 TestAddons/parallel/InspektorGadget 24.79
41 TestAddons/parallel/MetricsServer 19.93
42 TestAddons/parallel/HelmTiller 26.63
44 TestAddons/parallel/CSI 98.24
45 TestAddons/parallel/Headlamp 39
46 TestAddons/parallel/CloudSpanner 19.4
47 TestAddons/parallel/LocalPath 28.99
48 TestAddons/parallel/NvidiaDevicePlugin 20.19
49 TestAddons/parallel/Yakd 5.02
52 TestAddons/serial/GCPAuth/Namespaces 0.31
53 TestAddons/StoppedEnableDisable 50.33
54 TestCertOptions 419.95
55 TestCertExpiration 818.36
56 TestDockerFlags 377.58
57 TestForceSystemdFlag 384.13
58 TestForceSystemdEnv 522.51
65 TestErrorSpam/start 16.06
66 TestErrorSpam/status 34.29
67 TestErrorSpam/pause 21.46
68 TestErrorSpam/unpause 21.47
69 TestErrorSpam/stop 57.1
72 TestFunctional/serial/CopySyncFile 0.03
73 TestFunctional/serial/StartWithProxy 233.08
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 115.07
76 TestFunctional/serial/KubeContext 0.12
77 TestFunctional/serial/KubectlGetPods 0.2
80 TestFunctional/serial/CacheCmd/cache/add_remote 24.38
81 TestFunctional/serial/CacheCmd/cache/add_local 9.56
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.24
83 TestFunctional/serial/CacheCmd/cache/list 0.23
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.68
85 TestFunctional/serial/CacheCmd/cache/cache_reload 33.7
86 TestFunctional/serial/CacheCmd/cache/delete 0.47
87 TestFunctional/serial/MinikubeKubectlCmd 0.41
89 TestFunctional/serial/ExtraConfig 115.82
90 TestFunctional/serial/ComponentHealth 0.16
91 TestFunctional/serial/LogsCmd 8
92 TestFunctional/serial/LogsFileCmd 10.02
93 TestFunctional/serial/InvalidService 19.5
99 TestFunctional/parallel/StatusCmd 38.55
103 TestFunctional/parallel/ServiceCmdConnect 25.97
104 TestFunctional/parallel/AddonsCmd 0.64
105 TestFunctional/parallel/PersistentVolumeClaim 39.24
107 TestFunctional/parallel/SSHCmd 19.5
108 TestFunctional/parallel/CpCmd 56.08
109 TestFunctional/parallel/MySQL 55.02
110 TestFunctional/parallel/FileSync 9.34
111 TestFunctional/parallel/CertSync 55.26
115 TestFunctional/parallel/NodeLabels 0.25
117 TestFunctional/parallel/NonActiveRuntimeDisabled 9.69
119 TestFunctional/parallel/License 2.59
120 TestFunctional/parallel/ServiceCmd/DeployApp 16.38
121 TestFunctional/parallel/ProfileCmd/profile_not_create 10.43
122 TestFunctional/parallel/Version/short 0.23
123 TestFunctional/parallel/Version/components 7.46
124 TestFunctional/parallel/ProfileCmd/profile_list 10.65
125 TestFunctional/parallel/ImageCommands/ImageListShort 6.96
126 TestFunctional/parallel/ImageCommands/ImageListTable 6.87
127 TestFunctional/parallel/ImageCommands/ImageListJson 7.05
128 TestFunctional/parallel/ImageCommands/ImageListYaml 6.92
129 TestFunctional/parallel/ImageCommands/ImageBuild 25
130 TestFunctional/parallel/ImageCommands/Setup 3.74
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 23.76
132 TestFunctional/parallel/ServiceCmd/List 13.27
133 TestFunctional/parallel/ProfileCmd/profile_json_output 10.79
134 TestFunctional/parallel/ServiceCmd/JSONOutput 13.06
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 19.64
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 25.69
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.69
141 TestFunctional/parallel/ImageCommands/ImageRemove 15.37
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.44
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.57
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 16.83
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/parallel/UpdateContextCmd/no_changes 2.28
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.95
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.3
157 TestFunctional/parallel/DockerEnv/powershell 39.11
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.41
159 TestFunctional/delete_addon-resizer_images 0.41
160 TestFunctional/delete_my-image_image 0.16
161 TestFunctional/delete_minikube_cached_images 0.15
165 TestMutliControlPlane/serial/StartCluster 691.76
166 TestMutliControlPlane/serial/DeployApp 10.11
168 TestMutliControlPlane/serial/AddWorkerNode 236.54
169 TestMutliControlPlane/serial/NodeLabels 0.17
170 TestMutliControlPlane/serial/HAppyAfterClusterStart 26.05
171 TestMutliControlPlane/serial/CopyFile 574.87
172 TestMutliControlPlane/serial/StopSecondaryNode 67.06
173 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 19.52
177 TestImageBuild/serial/Setup 187.38
178 TestImageBuild/serial/NormalBuild 8.96
179 TestImageBuild/serial/BuildWithBuildArg 8.36
180 TestImageBuild/serial/BuildWithDockerIgnore 7.18
181 TestImageBuild/serial/BuildWithSpecifiedDockerfile 6.96
185 TestJSONOutput/start/Command 200.46
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 7.28
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 7.29
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 36.88
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 1.33
213 TestMainNoArgs 0.22
214 TestMinikubeProfile 500.67
217 TestMountStart/serial/StartWithMountFirst 145.81
218 TestMountStart/serial/VerifyMountFirst 8.79
219 TestMountStart/serial/StartWithMountSecond 146.45
220 TestMountStart/serial/VerifyMountSecond 8.84
221 TestMountStart/serial/DeleteFirst 29.21
222 TestMountStart/serial/VerifyMountPostDelete 8.93
223 TestMountStart/serial/Stop 24.7
224 TestMountStart/serial/RestartStopped 111.21
225 TestMountStart/serial/VerifyMountPostStop 8.79
228 TestMultiNode/serial/FreshStart2Nodes 399.19
229 TestMultiNode/serial/DeployApp2Nodes 8.77
231 TestMultiNode/serial/AddNode 211.54
232 TestMultiNode/serial/MultiNodeLabels 0.18
233 TestMultiNode/serial/ProfileList 11.34
234 TestMultiNode/serial/CopyFile 331.42
236 TestMultiNode/serial/StartAfterStop 171.46
241 TestPreload 489.36
242 TestScheduledStopWindows 318.54
247 TestRunningBinaryUpgrade 914.55
249 TestKubernetesUpgrade 1082.3
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.34
265 TestStoppedBinaryUpgrade/Setup 0.85
266 TestStoppedBinaryUpgrade/Upgrade 863.43
275 TestPause/serial/Start 505.33
276 TestPause/serial/SecondStartNoReconfiguration 260.51
277 TestStoppedBinaryUpgrade/MinikubeLogs 9.26
278 TestPause/serial/Pause 8.51
279 TestPause/serial/VerifyStatus 12.51
280 TestPause/serial/Unpause 7.48
281 TestPause/serial/PauseAgain 7.57
282 TestPause/serial/DeletePaused 47.78
283 TestPause/serial/VerifyDeletedResources 9.45
TestDownloadOnly/v1.20.0/json-events (17.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-677800 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-677800 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (17.9239072s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.92s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-677800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-677800: exit status 85 (297.8447ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-677800 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:40 UTC |          |
	|         | -p download-only-677800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 17:40:52
	Running on machine: minikube7
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 17:40:52.223800   13376 out.go:291] Setting OutFile to fd 656 ...
	I0314 17:40:52.224802   13376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:40:52.224802   13376 out.go:304] Setting ErrFile to fd 660...
	I0314 17:40:52.224802   13376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0314 17:40:52.236264   13376 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0314 17:40:52.245454   13376 out.go:298] Setting JSON to true
	I0314 17:40:52.247867   13376 start.go:129] hostinfo: {"hostname":"minikube7","uptime":59857,"bootTime":1710378195,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 17:40:52.248880   13376 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 17:40:52.254747   13376 out.go:97] [download-only-677800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 17:40:52.262274   13376 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 17:40:52.255309   13376 notify.go:220] Checking for updates...
	W0314 17:40:52.255309   13376 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0314 17:40:52.266290   13376 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 17:40:52.268671   13376 out.go:169] MINIKUBE_LOCATION=18384
	I0314 17:40:52.269859   13376 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0314 17:40:52.275803   13376 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 17:40:52.276321   13376 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 17:40:57.463796   13376 out.go:97] Using the hyperv driver based on user configuration
	I0314 17:40:57.463867   13376 start.go:297] selected driver: hyperv
	I0314 17:40:57.463867   13376 start.go:901] validating driver "hyperv" against <nil>
	I0314 17:40:57.463867   13376 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 17:40:57.533201   13376 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0314 17:40:57.534353   13376 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 17:40:57.534353   13376 cni.go:84] Creating CNI manager for ""
	I0314 17:40:57.534353   13376 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0314 17:40:57.534353   13376 start.go:340] cluster config:
	{Name:download-only-677800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-677800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 17:40:57.535512   13376 iso.go:125] acquiring lock: {Name:mk1b3e73402180391a20a865a9454da445c269fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 17:40:57.539015   13376 out.go:97] Downloading VM boot image ...
	I0314 17:40:57.539095   13376 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 17:41:02.539854   13376 out.go:97] Starting "download-only-677800" primary control-plane node in "download-only-677800" cluster
	I0314 17:41:02.539854   13376 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 17:41:02.584480   13376 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0314 17:41:02.584957   13376 cache.go:56] Caching tarball of preloaded images
	I0314 17:41:02.585707   13376 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 17:41:02.592212   13376 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0314 17:41:02.592212   13376 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0314 17:41:02.658360   13376 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0314 17:41:06.830257   13376 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0314 17:41:06.831516   13376 preload.go:255] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
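The preload handling above downloads the tarball with an md5 checksum embedded in the URL and then verifies it locally. An equivalent manual check on the host, using the stock Windows hashing tool against the cached path from the log, would be:

	certutil -hashfile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 MD5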
	
	
	* The control-plane node download-only-677800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-677800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 17:41:10.162801    7568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0794378s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-677800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-677800: (1.2210033s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.22s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (12.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-065000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-065000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (12.6660595s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (12.67s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-065000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-065000: exit status 85 (427.0524ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-677800 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:40 UTC |                     |
	|         | -p download-only-677800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| delete  | -p download-only-677800        | download-only-677800 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| start   | -o=json --download-only        | download-only-065000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC |                     |
	|         | -p download-only-065000        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 17:41:12
	Running on machine: minikube7
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 17:41:12.807261    5644 out.go:291] Setting OutFile to fd 732 ...
	I0314 17:41:12.807928    5644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:41:12.807928    5644 out.go:304] Setting ErrFile to fd 656...
	I0314 17:41:12.807928    5644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:41:12.827095    5644 out.go:298] Setting JSON to true
	I0314 17:41:12.830271    5644 start.go:129] hostinfo: {"hostname":"minikube7","uptime":59877,"bootTime":1710378195,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 17:41:12.830271    5644 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 17:41:12.835419    5644 out.go:97] [download-only-065000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 17:41:12.835631    5644 notify.go:220] Checking for updates...
	I0314 17:41:12.837583    5644 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 17:41:12.840944    5644 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 17:41:12.843008    5644 out.go:169] MINIKUBE_LOCATION=18384
	I0314 17:41:12.845005    5644 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0314 17:41:12.849367    5644 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 17:41:12.850157    5644 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 17:41:17.989948    5644 out.go:97] Using the hyperv driver based on user configuration
	I0314 17:41:17.989948    5644 start.go:297] selected driver: hyperv
	I0314 17:41:17.989948    5644 start.go:901] validating driver "hyperv" against <nil>
	I0314 17:41:17.989948    5644 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 17:41:18.035139    5644 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0314 17:41:18.035735    5644 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 17:41:18.036247    5644 cni.go:84] Creating CNI manager for ""
	I0314 17:41:18.036247    5644 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 17:41:18.036247    5644 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 17:41:18.036247    5644 start.go:340] cluster config:
	{Name:download-only-065000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 17:41:18.036247    5644 iso.go:125] acquiring lock: {Name:mk1b3e73402180391a20a865a9454da445c269fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 17:41:18.039511    5644 out.go:97] Starting "download-only-065000" primary control-plane node in "download-only-065000" cluster
	I0314 17:41:18.039511    5644 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 17:41:18.081495    5644 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0314 17:41:18.081495    5644 cache.go:56] Caching tarball of preloaded images
	I0314 17:41:18.082211    5644 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 17:41:18.085704    5644 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0314 17:41:18.085815    5644 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0314 17:41:18.152297    5644 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-065000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-065000"

                                                
                                                
-- /stdout --
** stderr ** 
	W0314 17:41:25.431631    7976 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.43s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (1.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2418933s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (1.24s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-065000
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-065000: (1.0481013s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.05s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (19.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-788200 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-788200 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (19.5234186s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (19.52s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.25s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-788200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-788200: exit status 85 (246.1432ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-677800 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:40 UTC |                     |
	|         | -p download-only-677800           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| delete  | -p download-only-677800           | download-only-677800 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| start   | -o=json --download-only           | download-only-065000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC |                     |
	|         | -p download-only-065000           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| delete  | -p download-only-065000           | download-only-065000 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC | 14 Mar 24 17:41 UTC |
	| start   | -o=json --download-only           | download-only-788200 | minikube7\jenkins | v1.32.0 | 14 Mar 24 17:41 UTC |                     |
	|         | -p download-only-788200           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 17:41:28
	Running on machine: minikube7
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 17:41:28.221669    9060 out.go:291] Setting OutFile to fd 660 ...
	I0314 17:41:28.222199    9060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:41:28.222199    9060 out.go:304] Setting ErrFile to fd 652...
	I0314 17:41:28.222199    9060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:41:28.241450    9060 out.go:298] Setting JSON to true
	I0314 17:41:28.244278    9060 start.go:129] hostinfo: {"hostname":"minikube7","uptime":59893,"bootTime":1710378195,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 17:41:28.244278    9060 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 17:41:28.392072    9060 out.go:97] [download-only-788200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 17:41:28.393096    9060 notify.go:220] Checking for updates...
	I0314 17:41:28.397390    9060 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 17:41:28.400066    9060 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 17:41:28.402519    9060 out.go:169] MINIKUBE_LOCATION=18384
	I0314 17:41:28.405055    9060 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0314 17:41:28.408582    9060 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 17:41:28.409542    9060 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 17:41:33.490365    9060 out.go:97] Using the hyperv driver based on user configuration
	I0314 17:41:33.490457    9060 start.go:297] selected driver: hyperv
	I0314 17:41:33.490457    9060 start.go:901] validating driver "hyperv" against <nil>
	I0314 17:41:33.490792    9060 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 17:41:33.532062    9060 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0314 17:41:33.533152    9060 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 17:41:33.533152    9060 cni.go:84] Creating CNI manager for ""
	I0314 17:41:33.533152    9060 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 17:41:33.533152    9060 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 17:41:33.533152    9060 start.go:340] cluster config:
	{Name:download-only-788200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-788200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 17:41:33.533684    9060 iso.go:125] acquiring lock: {Name:mk1b3e73402180391a20a865a9454da445c269fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 17:41:33.536944    9060 out.go:97] Starting "download-only-788200" primary control-plane node in "download-only-788200" cluster
	I0314 17:41:33.537013    9060 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 17:41:33.579569    9060 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0314 17:41:33.579569    9060 cache.go:56] Caching tarball of preloaded images
	I0314 17:41:33.579569    9060 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 17:41:33.636409    9060 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0314 17:41:33.637056    9060 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0314 17:41:33.701464    9060 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0314 17:41:41.282435    9060 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0314 17:41:41.283492    9060 preload.go:255] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0314 17:41:42.192536    9060 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0314 17:41:42.193729    9060 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-788200\config.json ...
	I0314 17:41:42.194447    9060 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-788200\config.json: {Name:mk3727cda763ddf3a23151d7e2a58296ce1231e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:41:42.195328    9060 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 17:41:42.195995    9060 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\windows\amd64\v1.29.0-rc.2/kubectl.exe
	
	
	* The control-plane node download-only-788200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-788200"

-- /stdout --
** stderr ** 
	W0314 17:41:47.668260    9620 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.25s)
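
The two LogsDuration transcripts above capture minikube's preload flow: resolve the remote preloaded-images tarball, download it with the md5 checksum pinned in the URL query, verify the file, then cache it under .minikube\cache\preloaded-tarball. A minimal sketch of fetching and checking the v1.29.0-rc.2 artifact by hand, assuming curl.exe and certutil are available on the host:

    PS> curl.exe -LO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4"
    PS> certutil -hashfile .\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 MD5
    # the printed digest should equal the checksum minikube requested: 47acda482c3add5b56147c92b8d7f468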

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.18s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.181515s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.18s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-788200
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-788200: (1.0784791s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.08s)

TestBinaryMirror (6.64s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-808300 --alsologtostderr --binary-mirror http://127.0.0.1:50434 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-808300 --alsologtostderr --binary-mirror http://127.0.0.1:50434 --driver=hyperv: (5.836269s)
helpers_test.go:175: Cleaning up "binary-mirror-808300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-808300
--- PASS: TestBinaryMirror (6.64s)

TestOffline (253.98s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-860100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-860100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m28.1296624s)
helpers_test.go:175: Cleaning up "offline-docker-860100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-860100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-860100: (45.8510926s)
--- PASS: TestOffline (253.98s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.27s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-953400
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-953400: exit status 85 (264.3148ms)

-- stdout --
	* Profile "addons-953400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-953400"

-- /stdout --
** stderr ** 
	W0314 17:42:00.336664    5636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.27s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.25s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-953400
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-953400: exit status 85 (254.0112ms)

-- stdout --
	* Profile "addons-953400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-953400"

-- /stdout --
** stderr ** 
	W0314 17:42:00.335674   13296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.25s)

TestAddons/Setup (377.27s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-953400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-953400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m17.2692319s)
--- PASS: TestAddons/Setup (377.27s)

TestAddons/parallel/Ingress (61.19s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-953400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-953400 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-953400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2d01eade-95db-4a9d-aa4f-fa6123fffddc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2d01eade-95db-4a9d-aa4f-fa6123fffddc] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.0108161s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.1284221s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-953400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0314 17:49:35.702385    9476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-953400 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 ip: (2.3840832s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.17.87.211
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 addons disable ingress-dns --alsologtostderr -v=1: (14.9263174s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 addons disable ingress --alsologtostderr -v=1: (20.7229541s)
--- PASS: TestAddons/parallel/Ingress (61.19s)
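
For reference, the ingress assertions above can be replayed by hand against a live addons-953400 profile, reusing the exact commands from the transcript: curl the controller from inside the VM with the test Host header, then resolve the ingress-dns record against the minikube VM's IP:

    PS> out/minikube-windows-amd64.exe -p addons-953400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    PS> out/minikube-windows-amd64.exe -p addons-953400 ip    # 172.17.87.211 in this run
    PS> nslookup hello-john.test 172.17.87.211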

TestAddons/parallel/InspektorGadget (24.79s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7glfr" [2dbfe7be-9274-4c45-ab1b-7cfd9a866ec7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0090383s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-953400
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-953400: (19.7710734s)
--- PASS: TestAddons/parallel/InspektorGadget (24.79s)

TestAddons/parallel/MetricsServer (19.93s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 23.4724ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-z95zf" [2fb36680-9447-477d-abd8-ef22bac39ee7] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0261322s
addons_test.go:415: (dbg) Run:  kubectl --context addons-953400 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 addons disable metrics-server --alsologtostderr -v=1: (14.7082245s)
--- PASS: TestAddons/parallel/MetricsServer (19.93s)

TestAddons/parallel/HelmTiller (26.63s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 5.0038ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-gqg8w" [26bf47a9-072f-42e9-9d7f-f46fb8757091] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0137755s
addons_test.go:473: (dbg) Run:  kubectl --context addons-953400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-953400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.1418339s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 addons disable helm-tiller --alsologtostderr -v=1: (14.4546258s)
--- PASS: TestAddons/parallel/HelmTiller (26.63s)

TestAddons/parallel/CSI (98.24s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 23.1568ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-953400 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-953400 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [926967d3-08e6-4eac-85e1-1799cfddbc1f] Pending
helpers_test.go:344: "task-pv-pod" [926967d3-08e6-4eac-85e1-1799cfddbc1f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [926967d3-08e6-4eac-85e1-1799cfddbc1f] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.0159322s
addons_test.go:584: (dbg) Run:  kubectl --context addons-953400 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-953400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-953400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-953400 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-953400 delete pod task-pv-pod: (1.6652237s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-953400 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-953400 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-953400 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [07647055-5add-40e7-9984-d0f41bca5829] Pending
helpers_test.go:344: "task-pv-pod-restore" [07647055-5add-40e7-9984-d0f41bca5829] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [07647055-5add-40e7-9984-d0f41bca5829] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0190012s
addons_test.go:626: (dbg) Run:  kubectl --context addons-953400 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-953400 delete pod task-pv-pod-restore: (1.500791s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-953400 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-953400 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 addons disable csi-hostpath-driver --alsologtostderr -v=1: (21.4495518s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 addons disable volumesnapshots --alsologtostderr -v=1: (14.5637238s)
--- PASS: TestAddons/parallel/CSI (98.24s)
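
The long runs of helpers_test.go:394 lines above are a poll loop: the harness re-reads the claim's phase until it leaves Pending. A minimal PowerShell equivalent of that wait, assuming the same context and claim name (a sketch, not the harness's own code):

    PS> while ((kubectl --context addons-953400 get pvc hpvc -o jsonpath='{.status.phase}') -ne 'Bound') {
    >>     Start-Sleep -Seconds 2   # re-check the phase every two seconds
    >> }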

TestAddons/parallel/Headlamp (39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-953400 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-953400 --alsologtostderr -v=1: (15.9809414s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-jcnhq" [fad5bc1f-f5df-4045-9b3c-84672d7f3f19] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-jcnhq" [fad5bc1f-f5df-4045-9b3c-84672d7f3f19] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 23.0207777s
--- PASS: TestAddons/parallel/Headlamp (39.00s)

TestAddons/parallel/CloudSpanner (19.4s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-wfz9s" [61d4fb53-55c0-43c5-955e-6e9eb266d1a0] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0190657s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-953400
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-953400: (14.3648156s)
--- PASS: TestAddons/parallel/CloudSpanner (19.40s)

TestAddons/parallel/LocalPath (28.99s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-953400 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-953400 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-953400 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5fce5b60-ef11-4147-b3e5-9b871f3eee06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5fce5b60-ef11-4147-b3e5-9b871f3eee06] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5fce5b60-ef11-4147-b3e5-9b871f3eee06] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0183921s
addons_test.go:891: (dbg) Run:  kubectl --context addons-953400 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 ssh "cat /opt/local-path-provisioner/pvc-9b8b44a0-12f8-43a2-8d75-342adde9e68c_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 ssh "cat /opt/local-path-provisioner/pvc-9b8b44a0-12f8-43a2-8d75-342adde9e68c_default_test-pvc/file1": (9.3363572s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-953400 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-953400 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-953400 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-953400 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (7.13031s)
--- PASS: TestAddons/parallel/LocalPath (28.99s)
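
The local-path assertion works by reading the PVC's backing file straight off the node: the storage-provisioner-rancher addon materializes each bound claim as a directory named pvc-<uid>_<namespace>_<claim> under /opt/local-path-provisioner, which is what the ssh "cat ..." step above reads. A sketch for inspecting that directory on a live cluster (the <pvc-dir> placeholder stands for whatever the first command lists):

    PS> out/minikube-windows-amd64.exe -p addons-953400 ssh "ls /opt/local-path-provisioner"
    PS> out/minikube-windows-amd64.exe -p addons-953400 ssh "cat /opt/local-path-provisioner/<pvc-dir>/file1"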

TestAddons/parallel/NvidiaDevicePlugin (20.19s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-k2kqr" [93445059-341c-47bd-aac9-8a1887ea3d53] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0156233s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-953400
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-953400: (14.1622895s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.19s)

TestAddons/parallel/Yakd (5.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-tq9fn" [f2e8075a-b365-4d62-acbc-21ab6ab52d61] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0211531s
--- PASS: TestAddons/parallel/Yakd (5.02s)

TestAddons/serial/GCPAuth/Namespaces (0.31s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-953400 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-953400 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.31s)

TestAddons/StoppedEnableDisable (50.33s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-953400
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-953400: (38.0977602s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-953400
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-953400: (4.879036s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-953400
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-953400: (4.5193771s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-953400
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-953400: (2.8305786s)
--- PASS: TestAddons/StoppedEnableDisable (50.33s)

TestCertOptions (419.95s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-003700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-003700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (6m1.0557565s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-003700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-003700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.26109s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-003700 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-003700 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-003700 -- "sudo cat /etc/kubernetes/admin.conf": (9.0282975s)
helpers_test.go:175: Cleaning up "cert-options-003700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-003700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-003700: (40.4612186s)
--- PASS: TestCertOptions (419.95s)
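
TestCertOptions checks that the extra --apiserver-ips/--apiserver-names values land in the apiserver certificate and that the non-default port 8555 is honored. The subject alternative names can be inspected with the same openssl invocation the test runs, filtered to the relevant lines (a sketch against a live cert-options profile):

    PS> out/minikube-windows-amd64.exe -p cert-options-003700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | Select-String -Pattern 'DNS:|IP Address:'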

TestCertExpiration (818.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-554200 --memory=2048 --cert-expiration=3m --driver=hyperv
E0314 20:23:01.735321   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 20:23:18.487113   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 20:23:38.729286   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-554200 --memory=2048 --cert-expiration=3m --driver=hyperv: (5m24.9492735s)
E0314 20:28:18.506440   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 20:28:38.747788   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-554200 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-554200 --memory=2048 --cert-expiration=8760h --driver=hyperv: (4m29.368218s)
helpers_test.go:175: Cleaning up "cert-expiration-554200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-554200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-554200: (44.030554s)
--- PASS: TestCertExpiration (818.36s)

TestDockerFlags (377.58s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-273600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-273600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (5m13.6772005s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-273600 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-273600 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.0928812s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-273600 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-273600 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.3379064s)
helpers_test.go:175: Cleaning up "docker-flags-273600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-273600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-273600: (45.469742s)
--- PASS: TestDockerFlags (377.58s)
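
TestDockerFlags forwards --docker-env and --docker-opt into the dockerd systemd unit, which is why the test reads back the Environment and ExecStart properties above. On a passing run the Environment line is expected to carry the injected FOO=BAR and BAZ=BAT pairs, and ExecStart the extra daemon options; a spot-check on a live profile (a sketch, assuming the profile still exists):

    PS> out/minikube-windows-amd64.exe -p docker-flags-273600 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # expect the FOO=BAR and BAZ=BAT pairs from --docker-env to appear in the Environment line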

TestForceSystemdFlag (384.13s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-005600 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-005600 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (5m28.8509676s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-005600 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-005600 ssh "docker info --format {{.CgroupDriver}}": (9.3258978s)
helpers_test.go:175: Cleaning up "force-systemd-flag-005600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-005600
E0314 20:08:38.659083   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-005600: (45.9547526s)
--- PASS: TestForceSystemdFlag (384.13s)

TestForceSystemdEnv (522.51s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-244500 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0314 20:03:18.392574   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 20:03:38.633394   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 20:06:21.651236   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-244500 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (7m53.8110467s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-244500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-244500 ssh "docker info --format {{.CgroupDriver}}": (9.2167631s)
helpers_test.go:175: Cleaning up "force-systemd-env-244500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-244500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-244500: (39.4813525s)
--- PASS: TestForceSystemdEnv (522.51s)

TestErrorSpam/start (16.06s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 start --dry-run: (5.2648563s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 start --dry-run: (5.4170495s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 start --dry-run: (5.3702586s)
--- PASS: TestErrorSpam/start (16.06s)

TestErrorSpam/status (34.29s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 status
E0314 17:56:01.810805   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 status: (11.7780141s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 status: (11.2923181s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 status: (11.214294s)
--- PASS: TestErrorSpam/status (34.29s)

TestErrorSpam/pause (21.46s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 pause: (7.3000673s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 pause: (7.1198642s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 pause: (7.0392595s)
--- PASS: TestErrorSpam/pause (21.46s)

TestErrorSpam/unpause (21.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 unpause: (7.3025112s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 unpause: (7.061897s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 unpause: (7.1029402s)
--- PASS: TestErrorSpam/unpause (21.47s)

TestErrorSpam/stop (57.1s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 stop: (36.3426524s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 stop: (10.5979127s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-536000 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-536000 stop: (10.1549686s)
--- PASS: TestErrorSpam/stop (57.10s)
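Note: each TestErrorSpam subtest runs the same subcommand three times against the nospam-536000 profile and fails if unexpected warnings or errors show up in the output. A rough sketch of that kind of check, assuming a simple substring scan (the real error_spam_test.go logic is more involved):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for i := 0; i < 3; i++ {
		out, _ := exec.Command("out/minikube-windows-amd64.exe",
			"-p", "nospam-536000", "stop").CombinedOutput()
		// Flag any line that looks like error or warning spam.
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "error") || strings.Contains(line, "WARNING") {
				fmt.Printf("run %d produced unexpected output: %s\n", i+1, line)
			}
		}
	}
}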

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\11052\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (233.08s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-866600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0314 17:58:45.667112   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-866600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m53.0758047s)
--- PASS: TestFunctional/serial/StartWithProxy (233.08s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (115.07s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-866600 --alsologtostderr -v=8
E0314 18:03:17.856499   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-866600 --alsologtostderr -v=8: (1m55.0719887s)
functional_test.go:659: soft start took 1m55.0732538s for "functional-866600" cluster.
--- PASS: TestFunctional/serial/SoftStart (115.07s)

TestFunctional/serial/KubeContext (0.12s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.12s)

TestFunctional/serial/KubectlGetPods (0.2s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-866600 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.20s)

TestFunctional/serial/CacheCmd/cache/add_remote (24.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 cache add registry.k8s.io/pause:3.1: (8.2890148s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 cache add registry.k8s.io/pause:3.3: (8.0907692s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 cache add registry.k8s.io/pause:latest: (8.0001304s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (24.38s)

TestFunctional/serial/CacheCmd/cache/add_local (9.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-866600 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2775468781\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-866600 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2775468781\001: (1.6846385s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 cache add minikube-local-cache-test:functional-866600
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 cache add minikube-local-cache-test:functional-866600: (7.4431309s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 cache delete minikube-local-cache-test:functional-866600
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-866600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.56s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.24s)

TestFunctional/serial/CacheCmd/cache/list (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.23s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh sudo crictl images: (8.6790372s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.68s)

TestFunctional/serial/CacheCmd/cache/cache_reload (33.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.7065808s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-866600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.7066664s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	W0314 18:05:07.237909   11244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 cache reload: (7.6256209s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (8.6575347s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (33.70s)
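Note: the sequence above is the point of this subtest: remove the cached image inside the VM, confirm crictl inspecti now fails, run "minikube cache reload", and confirm the image is back. A compressed sketch of that flow (profile and image names from the run above; error handling trimmed, so this is illustrative rather than the actual test code):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary under test against the functional profile.
func run(args ...string) error {
	return exec.Command("out/minikube-windows-amd64.exe",
		append([]string{"-p", "functional-866600"}, args...)...).Run()
}

func main() {
	run("ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	// The image was just removed, so inspecti is expected to fail here.
	if err := run("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("image still present after rmi")
	}
	run("cache", "reload")
	// After the reload, the cached image should be back in the VM.
	if err := run("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image missing after cache reload:", err)
	}
}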

TestFunctional/serial/CacheCmd/cache/delete (0.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.47s)

TestFunctional/serial/MinikubeKubectlCmd (0.41s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 kubectl -- --context functional-866600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.41s)

TestFunctional/serial/ExtraConfig (115.82s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-866600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-866600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m55.8211089s)
functional_test.go:757: restart took 1m55.8211089s for "functional-866600" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (115.82s)

TestFunctional/serial/ComponentHealth (0.16s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-866600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.16s)
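Note: the phase/status pairs above come from decoding "kubectl get po -l tier=control-plane -o=json" and inspecting each pod's phase and Ready condition. A minimal sketch of that decoding step (struct fields reduced to the ones used; not the actual functional_test.go types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-866600",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			// Every control-plane pod is expected to report Ready=True.
			if c.Type == "Ready" && c.Status != "True" {
				fmt.Printf("%s is not Ready\n", p.Metadata.Name)
			}
		}
	}
}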

TestFunctional/serial/LogsCmd (8s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 logs: (8.0025467s)
--- PASS: TestFunctional/serial/LogsCmd (8.00s)

TestFunctional/serial/LogsFileCmd (10.02s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1597810647\001\logs.txt
E0314 18:08:17.884286   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1597810647\001\logs.txt: (10.021617s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.02s)

TestFunctional/serial/InvalidService (19.5s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-866600 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-866600
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-866600: exit status 115 (15.5508598s)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://172.17.91.78:30312 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	W0314 18:08:21.708156    3832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_service_c9bf6787273d25f6c9d72c0b156373dea6a4fe44_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-866600 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (19.50s)

TestFunctional/parallel/StatusCmd (38.55s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 status: (13.1023661s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (12.7695782s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 status -o json: (12.6797781s)
--- PASS: TestFunctional/parallel/StatusCmd (38.55s)

TestFunctional/parallel/ServiceCmdConnect (25.97s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-866600 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-866600 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-nlhtc" [f3ada352-90f8-4c16-b218-072e4195be71] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-nlhtc" [f3ada352-90f8-4c16-b218-072e4195be71] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.0101455s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 service hello-node-connect --url: (16.5483466s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.17.91.78:30380
functional_test.go:1671: http://172.17.91.78:30380: success! body:

Hostname: hello-node-connect-55497b8b78-nlhtc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.17.91.78:8080/

Request Headers:
	accept-encoding=gzip
	host=172.17.91.78:30380
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (25.97s)
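Note: once "service hello-node-connect --url" prints the NodePort endpoint, the rest of the test is an ordinary HTTP round trip against the echoserver. A minimal sketch of that check (the URL is hard-coded from the run above; the real test discovers it at runtime):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint printed by `minikube service hello-node-connect --url`.
	resp, err := http.Get("http://172.17.91.78:30380")
	if err != nil {
		fmt.Println("service unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The echoserver reflects the request back, so a 200 with a body is a pass.
	fmt.Printf("status %d, %d body bytes\n", resp.StatusCode, len(body))
}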

TestFunctional/parallel/AddonsCmd (0.64s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.64s)

TestFunctional/parallel/PersistentVolumeClaim (39.24s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [74f7dcf3-94a7-441e-a9c5-207e2bbd1efe] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0185334s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-866600 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-866600 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-866600 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-866600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9d5c3c8b-4590-419a-a4a9-d8520554a5c8] Pending
helpers_test.go:344: "sp-pod" [9d5c3c8b-4590-419a-a4a9-d8520554a5c8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9d5c3c8b-4590-419a-a4a9-d8520554a5c8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.0217443s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-866600 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-866600 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-866600 delete -f testdata/storage-provisioner/pod.yaml: (1.287397s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-866600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4560818c-70b8-43e9-9940-3dd036be74b9] Pending
helpers_test.go:344: "sp-pod" [4560818c-70b8-43e9-9940-3dd036be74b9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4560818c-70b8-43e9-9940-3dd036be74b9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.016796s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-866600 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.24s)
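Note: the delete/re-apply in the middle is deliberate: the file written by the first sp-pod must still exist when a second pod mounts the same claim. A compressed sketch of the persistence check (kubectl invocations only; the wait for pod readiness between steps is elided):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl subcommand against the test cluster's context.
func kubectl(args ...string) ([]byte, error) {
	return exec.Command("kubectl",
		append([]string{"--context", "functional-866600"}, args...)...).CombinedOutput()
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ... wait for the replacement sp-pod to become Ready before checking ...
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err != nil {
		fmt.Println("ls failed:", err)
		return
	}
	// foo must have survived the pod deletion because it lives on the PVC.
	fmt.Printf("persisted files: %s", out)
}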

TestFunctional/parallel/SSHCmd (19.5s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh "echo hello": (10.0192608s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh "cat /etc/hostname": (9.4836429s)
--- PASS: TestFunctional/parallel/SSHCmd (19.50s)

TestFunctional/parallel/CpCmd (56.08s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 cp testdata\cp-test.txt /home/docker/cp-test.txt: (7.9502919s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh -n functional-866600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh -n functional-866600 "sudo cat /home/docker/cp-test.txt": (9.6442391s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 cp functional-866600:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd1109994504\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 cp functional-866600:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd1109994504\001\cp-test.txt: (10.570902s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh -n functional-866600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh -n functional-866600 "sudo cat /home/docker/cp-test.txt": (10.0520917s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.0644598s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh -n functional-866600 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh -n functional-866600 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.7909946s)
--- PASS: TestFunctional/parallel/CpCmd (56.08s)

TestFunctional/parallel/MySQL (55.02s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-866600 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-j6rhh" [734fbaca-4ffe-42a3-b434-770062af4500] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-j6rhh" [734fbaca-4ffe-42a3-b434-770062af4500] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 41.0105888s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-866600 exec mysql-859648c796-j6rhh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-866600 exec mysql-859648c796-j6rhh -- mysql -ppassword -e "show databases;": exit status 1 (271.0467ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-866600 exec mysql-859648c796-j6rhh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-866600 exec mysql-859648c796-j6rhh -- mysql -ppassword -e "show databases;": exit status 1 (317.5995ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-866600 exec mysql-859648c796-j6rhh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-866600 exec mysql-859648c796-j6rhh -- mysql -ppassword -e "show databases;": exit status 1 (288.2665ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-866600 exec mysql-859648c796-j6rhh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-866600 exec mysql-859648c796-j6rhh -- mysql -ppassword -e "show databases;": exit status 1 (287.9351ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-866600 exec mysql-859648c796-j6rhh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-866600 exec mysql-859648c796-j6rhh -- mysql -ppassword -e "show databases;": exit status 1 (238.0189ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-866600 exec mysql-859648c796-j6rhh -- mysql -ppassword -e "show databases;"
E0314 18:13:17.898612   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (55.02s)
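Note: the repeated ERROR 2002/1045 exits above are expected noise. mysqld inside the pod needs time to finish initializing after the container reports Running, so the test simply retries the query until it succeeds. A sketch of that retry pattern (attempt count and sleep interval are illustrative, not the test's actual backoff):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Keep retrying until mysqld inside the pod actually accepts queries.
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-866600",
			"exec", "mysql-859648c796-j6rhh", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("mysql never became ready")
}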

TestFunctional/parallel/FileSync (9.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11052/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /etc/test/nested/copy/11052/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /etc/test/nested/copy/11052/hosts": (9.3436092s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (9.34s)

TestFunctional/parallel/CertSync (55.26s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11052.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /etc/ssl/certs/11052.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /etc/ssl/certs/11052.pem": (9.205626s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11052.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /usr/share/ca-certificates/11052.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /usr/share/ca-certificates/11052.pem": (9.1430602s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /etc/ssl/certs/51391683.0": (9.3190193s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/110522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /etc/ssl/certs/110522.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /etc/ssl/certs/110522.pem": (9.1469533s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/110522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /usr/share/ca-certificates/110522.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /usr/share/ca-certificates/110522.pem": (9.5699851s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (8.8746104s)
--- PASS: TestFunctional/parallel/CertSync (55.26s)

TestFunctional/parallel/NodeLabels (0.25s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-866600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.25s)

TestFunctional/parallel/NonActiveRuntimeDisabled (9.69s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-866600 ssh "sudo systemctl is-active crio": exit status 1 (9.6890346s)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	W0314 18:08:40.475381    8848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (9.69s)
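Note: the non-zero exit is the success case here. systemctl is-active exits with status 3 for an inactive unit (that convention is systemd's, not minikube's), so the test asserts that crio is not running on a docker-runtime cluster. A sketch of reading that result from Go, checking both the non-zero exit and the "inactive" stdout seen above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "functional-866600",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.Output() // out still holds captured stdout when err != nil
	var exitErr *exec.ExitError
	// Pass condition: the command exits non-zero AND stdout says "inactive".
	if errors.As(err, &exitErr) && strings.TrimSpace(string(out)) == "inactive" {
		fmt.Printf("crio inactive as expected (exit %d)\n", exitErr.ExitCode())
		return
	}
	fmt.Printf("unexpected result: err=%v out=%q\n", err, out)
}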

TestFunctional/parallel/License (2.59s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (2.577506s)
--- PASS: TestFunctional/parallel/License (2.59s)

TestFunctional/parallel/ServiceCmd/DeployApp (16.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-866600 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-866600 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-wmhtq" [28fc6eb1-c18c-4445-b94f-412e84d3adf1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-wmhtq" [28fc6eb1-c18c-4445-b94f-412e84d3adf1] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.0097882s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.38s)

TestFunctional/parallel/ProfileCmd/profile_not_create (10.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.0119561s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (10.43s)

TestFunctional/parallel/Version/short (0.23s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 version --short
--- PASS: TestFunctional/parallel/Version/short (0.23s)

TestFunctional/parallel/Version/components (7.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 version -o=json --components: (7.4576266s)
--- PASS: TestFunctional/parallel/Version/components (7.46s)

TestFunctional/parallel/ProfileCmd/profile_list (10.65s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (10.388184s)
functional_test.go:1311: Took "10.3883476s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "263.7501ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (10.65s)

TestFunctional/parallel/ImageCommands/ImageListShort (6.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image ls --format short --alsologtostderr: (6.9616921s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-866600 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-866600
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-866600
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-866600 image ls --format short --alsologtostderr:
W0314 18:11:38.789130    9152 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0314 18:11:38.856383    9152 out.go:291] Setting OutFile to fd 1348 ...
I0314 18:11:38.857007    9152 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:11:38.857007    9152 out.go:304] Setting ErrFile to fd 1040...
I0314 18:11:38.857104    9152 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:11:38.871356    9152 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:11:38.871880    9152 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:11:38.872173    9152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
I0314 18:11:40.898920    9152 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:11:40.898973    9152 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:40.909510    9152 ssh_runner.go:195] Run: systemctl --version
I0314 18:11:40.909510    9152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
I0314 18:11:42.960208    9152 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:11:42.961147    9152 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:42.961205    9152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
I0314 18:11:45.424201    9152 main.go:141] libmachine: [stdout =====>] : 172.17.91.78

I0314 18:11:45.424201    9152 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:45.424923    9152 sshutil.go:53] new ssh client: &{IP:172.17.91.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-866600\id_rsa Username:docker}
I0314 18:11:45.539850    9152 ssh_runner.go:235] Completed: systemctl --version: (4.6299908s)
I0314 18:11:45.547276    9152 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (6.96s)
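
The --alsologtostderr trace above is a compact map of how every image command reaches the guest on the hyperv driver: PowerShell is asked for the VM state, then for the first network adapter's first IP address, an SSH session is opened with the machine key, and the listing itself is just docker images run inside the VM. A standalone sketch of that sequence in Go; the VM name and key path are taken from this run, and shelling out to a stock OpenSSH client is an illustrative assumption (minikube itself goes through libmachine and its ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // powershell runs one Hyper-V query the way the trace does:
    // powershell.exe -NoProfile -NonInteractive <expression>
    func powershell(expr string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        vm := "functional-866600" // profile/VM name from this run

        state, err := powershell(`( Hyper-V\Get-VM ` + vm + ` ).state`)
        if err != nil || state != "Running" {
            fmt.Println("VM is not running:", state, err)
            return
        }

        ip, err := powershell(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
        if err != nil {
            fmt.Println("could not resolve VM IP:", err)
            return
        }

        // Equivalent of the ssh_runner step: list images inside the guest.
        // Assumes an OpenSSH client on PATH; minikube uses its own SSH runner.
        key := `C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\` + vm + `\id_rsa`
        out, err := exec.Command("ssh", "-i", key, "docker@"+ip,
            `docker images --no-trunc --format "{{json .}}"`).CombinedOutput()
        fmt.Println(string(out), err)
    }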

TestFunctional/parallel/ImageCommands/ImageListTable (6.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image ls --format table --alsologtostderr: (6.8697782s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-866600 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 92b11f67642b6 | 187MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 6913ed9ec8d00 | 42.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| gcr.io/google-containers/addon-resizer      | functional-866600 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-866600 | cf4afba35b061 | 30B    |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-866600 image ls --format table --alsologtostderr:
W0314 18:11:55.534903   10748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0314 18:11:55.593902   10748 out.go:291] Setting OutFile to fd 1420 ...
I0314 18:11:55.593902   10748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:11:55.593902   10748 out.go:304] Setting ErrFile to fd 1376...
I0314 18:11:55.593902   10748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:11:55.608246   10748 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:11:55.608246   10748 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:11:55.609218   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
I0314 18:11:57.604056   10748 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:11:57.604056   10748 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:57.614145   10748 ssh_runner.go:195] Run: systemctl --version
I0314 18:11:57.614145   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
I0314 18:11:59.637731   10748 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:11:59.637731   10748 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:59.637801   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
I0314 18:12:02.078601   10748 main.go:141] libmachine: [stdout =====>] : 172.17.91.78

I0314 18:12:02.078601   10748 main.go:141] libmachine: [stderr =====>] : 
I0314 18:12:02.079215   10748 sshutil.go:53] new ssh client: &{IP:172.17.91.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-866600\id_rsa Username:docker}
I0314 18:12:02.183146   10748 ssh_runner.go:235] Completed: systemctl --version: (4.5686557s)
I0314 18:12:02.190782   10748 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (6.87s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image ls --format json --alsologtostderr: (7.0530347s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-866600 image ls --format json --alsologtostderr:
[{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d
35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-866600"],"size":"32900000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause
:latest"],"size":"240000"},{"id":"cf4afba35b061f3d78dbf60d8b04ddd85c978ca73725a1b6530d09139eceb1bd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-866600"],"size":"30"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-866600 image ls --format json --alsologtostderr:
W0314 18:11:48.473010    8800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0314 18:11:48.549013    8800 out.go:291] Setting OutFile to fd 1060 ...
I0314 18:11:48.550016    8800 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:11:48.550016    8800 out.go:304] Setting ErrFile to fd 1396...
I0314 18:11:48.550016    8800 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:11:48.564014    8800 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:11:48.564014    8800 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:11:48.565011    8800 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
I0314 18:11:50.607784    8800 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:11:50.607784    8800 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:50.616784    8800 ssh_runner.go:195] Run: systemctl --version
I0314 18:11:50.616784    8800 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
I0314 18:11:52.637172    8800 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:11:52.637172    8800 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:52.637172    8800 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
I0314 18:11:55.046707    8800 main.go:141] libmachine: [stdout =====>] : 172.17.91.78

I0314 18:11:55.047109    8800 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:55.047496    8800 sshutil.go:53] new ssh client: &{IP:172.17.91.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-866600\id_rsa Username:docker}
I0314 18:11:55.156081    8800 ssh_runner.go:235] Completed: systemctl --version: (4.5389537s)
I0314 18:11:55.163154    8800 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.05s)
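
The json format is the machine-readable twin of the table above: one flat array of image records. A minimal decoder, assuming only the four fields visible in this output (note that size is a decimal byte count serialized as a string):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // image mirrors one record of `minikube image ls --format json` as printed above.
    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"` // byte count as a decimal string, e.g. "73200000"
    }

    func main() {
        out, err := exec.Command("out/minikube-windows-amd64.exe",
            "-p", "functional-866600", "image", "ls", "--format", "json").Output()
        if err != nil {
            panic(err)
        }
        var imgs []image
        if err := json.Unmarshal(out, &imgs); err != nil {
            panic(err)
        }
        for _, img := range imgs {
            fmt.Println(img.RepoTags, img.Size)
        }
    }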

TestFunctional/parallel/ImageCommands/ImageListYaml (6.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image ls --format yaml --alsologtostderr: (6.9228766s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-866600 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-866600
size: "32900000"
- id: cf4afba35b061f3d78dbf60d8b04ddd85c978ca73725a1b6530d09139eceb1bd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-866600
size: "30"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-866600 image ls --format yaml --alsologtostderr:
W0314 18:11:41.550761    8976 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0314 18:11:41.604964    8976 out.go:291] Setting OutFile to fd 1104 ...
I0314 18:11:41.618225    8976 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:11:41.618225    8976 out.go:304] Setting ErrFile to fd 1232...
I0314 18:11:41.618225    8976 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:11:41.637544    8976 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:11:41.638545    8976 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:11:41.639321    8976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
I0314 18:11:43.675127    8976 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:11:43.675305    8976 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:43.684255    8976 ssh_runner.go:195] Run: systemctl --version
I0314 18:11:43.684255    8976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
I0314 18:11:45.715449    8976 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:11:45.715449    8976 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:45.715449    8976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
I0314 18:11:48.148936    8976 main.go:141] libmachine: [stdout =====>] : 172.17.91.78

I0314 18:11:48.148936    8976 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:48.149298    8976 sshutil.go:53] new ssh client: &{IP:172.17.91.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-866600\id_rsa Username:docker}
I0314 18:11:48.257016    8976 ssh_runner.go:235] Completed: systemctl --version: (4.5724163s)
I0314 18:11:48.266843    8976 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (6.92s)

TestFunctional/parallel/ImageCommands/ImageBuild (25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-866600 ssh pgrep buildkitd: exit status 1 (8.8444579s)

** stderr ** 
	W0314 18:11:45.752701    5872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image build -t localhost/my-image:functional-866600 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image build -t localhost/my-image:functional-866600 testdata\build --alsologtostderr: (9.3901533s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-866600 image build -t localhost/my-image:functional-866600 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 83244e88a5aa
---> Removed intermediate container 83244e88a5aa
---> 4dd1e588ba79
Step 3/3 : ADD content.txt /
---> 7390de8a3a16
Successfully built 7390de8a3a16
Successfully tagged localhost/my-image:functional-866600
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-866600 image build -t localhost/my-image:functional-866600 testdata\build --alsologtostderr:
W0314 18:11:54.615187    7588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0314 18:11:54.670568    7588 out.go:291] Setting OutFile to fd 1040 ...
I0314 18:11:54.684741    7588 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:11:54.684741    7588 out.go:304] Setting ErrFile to fd 1064...
I0314 18:11:54.684741    7588 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:11:54.698907    7588 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:11:54.713893    7588 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:11:54.714899    7588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
I0314 18:11:56.726071    7588 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:11:56.726277    7588 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:56.736397    7588 ssh_runner.go:195] Run: systemctl --version
I0314 18:11:56.736397    7588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-866600 ).state
I0314 18:11:58.776393    7588 main.go:141] libmachine: [stdout =====>] : Running

I0314 18:11:58.776393    7588 main.go:141] libmachine: [stderr =====>] : 
I0314 18:11:58.776393    7588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-866600 ).networkadapters[0]).ipaddresses[0]
I0314 18:12:01.179313    7588 main.go:141] libmachine: [stdout =====>] : 172.17.91.78

I0314 18:12:01.179313    7588 main.go:141] libmachine: [stderr =====>] : 
I0314 18:12:01.180431    7588 sshutil.go:53] new ssh client: &{IP:172.17.91.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-866600\id_rsa Username:docker}
I0314 18:12:01.264545    7588 ssh_runner.go:235] Completed: systemctl --version: (4.5278057s)
I0314 18:12:01.264545    7588 build_images.go:161] Building image from path: C:\Users\jenkins.minikube7\AppData\Local\Temp\build.1178432815.tar
I0314 18:12:01.281042    7588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0314 18:12:01.310758    7588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1178432815.tar
I0314 18:12:01.318181    7588 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1178432815.tar: stat -c "%s %y" /var/lib/minikube/build/build.1178432815.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1178432815.tar': No such file or directory
I0314 18:12:01.318181    7588 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\AppData\Local\Temp\build.1178432815.tar --> /var/lib/minikube/build/build.1178432815.tar (3072 bytes)
I0314 18:12:01.383147    7588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1178432815
I0314 18:12:01.416577    7588 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1178432815 -xf /var/lib/minikube/build/build.1178432815.tar
I0314 18:12:01.437162    7588 docker.go:360] Building image: /var/lib/minikube/build/build.1178432815
I0314 18:12:01.448333    7588 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-866600 /var/lib/minikube/build/build.1178432815
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0314 18:12:03.793900    7588 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-866600 /var/lib/minikube/build/build.1178432815: (2.3452408s)
I0314 18:12:03.801899    7588 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1178432815
I0314 18:12:03.832975    7588 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1178432815.tar
I0314 18:12:03.856345    7588 build_images.go:217] Built localhost/my-image:functional-866600 from C:\Users\jenkins.minikube7\AppData\Local\Temp\build.1178432815.tar
I0314 18:12:03.856539    7588 build_images.go:133] succeeded building to: functional-866600
I0314 18:12:03.856539    7588 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image ls: (6.7632037s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (25.00s)
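
The trace shows image build's fixed host-to-guest pipeline: pack testdata\build into a tar on the host, scp it to /var/lib/minikube/build in the VM, unpack it, run the (legacy, as Docker warns) docker build, then remove both the directory and the tar. The three Dockerfile steps themselves are visible in the stdout above. A sketch that replays the guest-side commands in that order; run is a hypothetical stand-in for the SSH runner and only echoes each step:

    package main

    import (
        "fmt"
        "strings"
    )

    // run stands in for minikube's ssh_runner; here it only echoes the command.
    func run(cmd string) error {
        fmt.Println("+", cmd)
        return nil
    }

    // buildFromTar replays the guest-side steps visible in the trace above.
    func buildFromTar(tag, tarName string) error {
        dir := "/var/lib/minikube/build/" + strings.TrimSuffix(tarName, ".tar")
        for _, step := range []string{
            "sudo mkdir -p " + dir,
            "sudo tar -C " + dir + " -xf /var/lib/minikube/build/" + tarName,
            "docker build -t " + tag + " " + dir,
            "sudo rm -rf " + dir,
            "sudo rm -f /var/lib/minikube/build/" + tarName,
        } {
            if err := run(step); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        _ = buildFromTar("localhost/my-image:functional-866600", "build.1178432815.tar")
    }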

TestFunctional/parallel/ImageCommands/Setup (3.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.497974s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-866600
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.74s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (23.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image load --daemon gcr.io/google-containers/addon-resizer:functional-866600 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image load --daemon gcr.io/google-containers/addon-resizer:functional-866600 --alsologtostderr: (15.9939672s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image ls: (7.7700768s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (23.76s)

TestFunctional/parallel/ServiceCmd/List (13.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 service list: (13.2707724s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (13.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (10.79s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (10.5535124s)
functional_test.go:1362: Took "10.5537479s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "232.7477ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (10.79s)

TestFunctional/parallel/ServiceCmd/JSONOutput (13.06s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 service list -o json: (13.0614185s)
functional_test.go:1490: Took "13.0615449s" to run "out/minikube-windows-amd64.exe -p functional-866600 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (13.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image load --daemon gcr.io/google-containers/addon-resizer:functional-866600 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image load --daemon gcr.io/google-containers/addon-resizer:functional-866600 --alsologtostderr: (12.0401041s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image ls: (7.5968479s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.64s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (25.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.2410997s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-866600
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image load --daemon gcr.io/google-containers/addon-resizer:functional-866600 --alsologtostderr
E0314 18:09:41.084123   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image load --daemon gcr.io/google-containers/addon-resizer:functional-866600 --alsologtostderr: (14.3445854s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image ls: (7.8753369s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (25.69s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image save gcr.io/google-containers/addon-resizer:functional-866600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image save gcr.io/google-containers/addon-resizer:functional-866600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (8.6874766s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.69s)

TestFunctional/parallel/ImageCommands/ImageRemove (15.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image rm gcr.io/google-containers/addon-resizer:functional-866600 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image rm gcr.io/google-containers/addon-resizer:functional-866600 --alsologtostderr: (7.2802525s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image ls: (8.0852762s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (15.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-866600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-866600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-866600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11924: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 2312: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-866600 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-866600 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.57s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-866600 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [69139248-c8de-468a-b45f-f65d882392fc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [69139248-c8de-468a-b45f-f65d882392fc] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.0094996s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.6640011s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image ls: (7.1669424s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.83s)
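
ImageSaveToFile and ImageLoadFromFile together round-trip an image through a tarball on the host. The same two commands, driven from Go with the binary and paths from this run:

    package main

    import (
        "os"
        "os/exec"
    )

    // mk invokes the minikube binary under test against the functional profile.
    func mk(args ...string) error {
        cmd := exec.Command("out/minikube-windows-amd64.exe",
            append([]string{"-p", "functional-866600"}, args...)...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        tar := `C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar`
        // Export the tagged image from the cluster, then import it back.
        if err := mk("image", "save", "gcr.io/google-containers/addon-resizer:functional-866600", tar); err != nil {
            panic(err)
        }
        if err := mk("image", "load", tar); err != nil {
            panic(err)
        }
    }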

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-866600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 12436: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 update-context --alsologtostderr -v=2: (2.2799799s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.28s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.95s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 update-context --alsologtostderr -v=2: (2.9474288s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.95s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 update-context --alsologtostderr -v=2: (2.2997907s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.30s)

TestFunctional/parallel/DockerEnv/powershell (39.11s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-866600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-866600"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-866600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-866600": (25.5085878s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-866600 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-866600 docker-env | Invoke-Expression ; docker images": (13.5867637s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (39.11s)
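
DockerEnv/powershell exercises a real shell round trip: docker-env prints PowerShell variable assignments, Invoke-Expression evaluates them in the current session, and a plain docker images afterwards talks to the daemon inside the VM. The same pipeline the test runs, wrapped in Go:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Evaluate minikube's env script in one PowerShell session, then query
        // the in-VM Docker daemon with an ordinary docker CLI call.
        script := "out/minikube-windows-amd64.exe -p functional-866600 docker-env | " +
            "Invoke-Expression ; docker images"
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }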

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-866600
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-866600 image save --daemon gcr.io/google-containers/addon-resizer:functional-866600 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-866600 image save --daemon gcr.io/google-containers/addon-resizer:functional-866600 --alsologtostderr: (9.0282814s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-866600
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.41s)

TestFunctional/delete_addon-resizer_images (0.41s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-866600
--- PASS: TestFunctional/delete_addon-resizer_images (0.41s)

TestFunctional/delete_my-image_image (0.16s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-866600
--- PASS: TestFunctional/delete_my-image_image (0.16s)

TestFunctional/delete_minikube_cached_images (0.15s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-866600
--- PASS: TestFunctional/delete_minikube_cached_images (0.15s)

TestMutliControlPlane/serial/StartCluster (691.76s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-832100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0314 18:18:17.930744   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 18:18:38.175229   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:18:38.189688   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:18:38.205097   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:18:38.236619   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:18:38.283029   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:18:38.377162   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:18:38.550091   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:18:38.882753   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:18:39.531220   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:18:40.825913   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:18:43.401043   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:18:48.525149   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:18:58.776596   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:19:19.268032   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:20:00.232312   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:21:22.197502   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:23:17.946206   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 18:23:38.200663   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:24:06.056333   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 18:26:21.163850   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-832100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (10m58.4901674s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr: (33.267083s)
--- PASS: TestMutliControlPlane/serial/StartCluster (691.76s)

TestMutliControlPlane/serial/DeployApp (10.11s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-832100 -- rollout status deployment/busybox: (3.4682847s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-9wj82 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-9wj82 -- nslookup kubernetes.io: (1.6089279s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-qjmj7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-zncln -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-9wj82 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-qjmj7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-zncln -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-9wj82 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-qjmj7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-832100 -- exec busybox-5b5d89c9d6-zncln -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (10.11s)
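
For reference, the DeployApp sequence above boils down to: apply the DNS test manifest, wait for the busybox rollout, then resolve one external name (kubernetes.io) and the in-cluster service names from every pod. A minimal Go sketch of that loop, assuming kubectl is on PATH and already pointed at the cluster (the test itself goes through the minikube kubectl -p ha-832100 wrapper; pod names are discovered at runtime):

// dns_check.go - sketch of the DeployApp DNS verification under the
// assumptions above. Not the test's code, just the same steps.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	if _, err := kubectl("apply", "-f", "./testdata/ha/ha-pod-dns-test.yaml"); err != nil {
		panic(err)
	}
	if _, err := kubectl("rollout", "status", "deployment/busybox"); err != nil {
		panic(err)
	}
	names, _ := kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
	for _, pod := range strings.Fields(names) {
		// Every pod must resolve an external and an in-cluster name.
		for _, host := range []string{"kubernetes.io", "kubernetes.default.svc.cluster.local"} {
			if out, err := kubectl("exec", pod, "--", "nslookup", host); err != nil {
				fmt.Printf("%s could not resolve %s: %v\n%s", pod, host, err, out)
			}
		}
	}
}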

TestMutliControlPlane/serial/AddWorkerNode (236.54s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-832100 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-832100 -v=7 --alsologtostderr: (3m12.1912407s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr: (44.3496906s)
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (236.54s)

TestMutliControlPlane/serial/NodeLabels (0.17s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-832100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.17s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (26.05s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0314 18:33:17.996047   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (26.0509859s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (26.05s)

TestMutliControlPlane/serial/CopyFile (574.87s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status --output json -v=7 --alsologtostderr
E0314 18:33:38.233214   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 status --output json -v=7 --alsologtostderr: (44.1031424s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp testdata\cp-test.txt ha-832100:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp testdata\cp-test.txt ha-832100:/home/docker/cp-test.txt: (8.6982834s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test.txt": (8.8080925s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile2068586315\001\cp-test_ha-832100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile2068586315\001\cp-test_ha-832100.txt: (8.7089513s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test.txt": (8.7946824s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100:/home/docker/cp-test.txt ha-832100-m02:/home/docker/cp-test_ha-832100_ha-832100-m02.txt
E0314 18:35:01.472084   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100:/home/docker/cp-test.txt ha-832100-m02:/home/docker/cp-test_ha-832100_ha-832100-m02.txt: (15.2590735s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test.txt": (8.7189382s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test_ha-832100_ha-832100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test_ha-832100_ha-832100-m02.txt": (8.7377453s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100:/home/docker/cp-test.txt ha-832100-m03:/home/docker/cp-test_ha-832100_ha-832100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100:/home/docker/cp-test.txt ha-832100-m03:/home/docker/cp-test_ha-832100_ha-832100-m03.txt: (15.2510609s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test.txt": (8.6855266s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test_ha-832100_ha-832100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test_ha-832100_ha-832100-m03.txt": (8.7089537s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100:/home/docker/cp-test.txt ha-832100-m04:/home/docker/cp-test_ha-832100_ha-832100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100:/home/docker/cp-test.txt ha-832100-m04:/home/docker/cp-test_ha-832100_ha-832100-m04.txt: (15.2229471s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test.txt": (8.6635429s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test_ha-832100_ha-832100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test_ha-832100_ha-832100-m04.txt": (8.7839182s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp testdata\cp-test.txt ha-832100-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp testdata\cp-test.txt ha-832100-m02:/home/docker/cp-test.txt: (8.7867993s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test.txt": (8.6957138s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile2068586315\001\cp-test_ha-832100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile2068586315\001\cp-test_ha-832100-m02.txt: (8.6676886s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test.txt": (8.6752034s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m02:/home/docker/cp-test.txt ha-832100:/home/docker/cp-test_ha-832100-m02_ha-832100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m02:/home/docker/cp-test.txt ha-832100:/home/docker/cp-test_ha-832100-m02_ha-832100.txt: (15.202365s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test.txt": (8.7282949s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test_ha-832100-m02_ha-832100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test_ha-832100-m02_ha-832100.txt": (8.6752167s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m02:/home/docker/cp-test.txt ha-832100-m03:/home/docker/cp-test_ha-832100-m02_ha-832100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m02:/home/docker/cp-test.txt ha-832100-m03:/home/docker/cp-test_ha-832100-m02_ha-832100-m03.txt: (15.1535143s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test.txt": (8.6825157s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test_ha-832100-m02_ha-832100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test_ha-832100-m02_ha-832100-m03.txt": (8.7430628s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m02:/home/docker/cp-test.txt ha-832100-m04:/home/docker/cp-test_ha-832100-m02_ha-832100-m04.txt
E0314 18:38:18.020566   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m02:/home/docker/cp-test.txt ha-832100-m04:/home/docker/cp-test_ha-832100-m02_ha-832100-m04.txt: (15.1629667s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test.txt": (8.7178174s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test_ha-832100-m02_ha-832100-m04.txt"
E0314 18:38:38.263362   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test_ha-832100-m02_ha-832100-m04.txt": (8.7084171s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp testdata\cp-test.txt ha-832100-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp testdata\cp-test.txt ha-832100-m03:/home/docker/cp-test.txt: (8.7312806s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test.txt": (8.7479465s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile2068586315\001\cp-test_ha-832100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile2068586315\001\cp-test_ha-832100-m03.txt: (8.7695851s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test.txt": (8.6983993s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m03:/home/docker/cp-test.txt ha-832100:/home/docker/cp-test_ha-832100-m03_ha-832100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m03:/home/docker/cp-test.txt ha-832100:/home/docker/cp-test_ha-832100-m03_ha-832100.txt: (15.1968266s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test.txt": (8.7065675s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test_ha-832100-m03_ha-832100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test_ha-832100-m03_ha-832100.txt": (8.6600679s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m03:/home/docker/cp-test.txt ha-832100-m02:/home/docker/cp-test_ha-832100-m03_ha-832100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m03:/home/docker/cp-test.txt ha-832100-m02:/home/docker/cp-test_ha-832100-m03_ha-832100-m02.txt: (15.2392165s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test.txt": (8.7558202s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test_ha-832100-m03_ha-832100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test_ha-832100-m03_ha-832100-m02.txt": (8.624575s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m03:/home/docker/cp-test.txt ha-832100-m04:/home/docker/cp-test_ha-832100-m03_ha-832100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m03:/home/docker/cp-test.txt ha-832100-m04:/home/docker/cp-test_ha-832100-m03_ha-832100-m04.txt: (15.2154847s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test.txt": (8.640949s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test_ha-832100-m03_ha-832100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test_ha-832100-m03_ha-832100-m04.txt": (8.6663667s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp testdata\cp-test.txt ha-832100-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp testdata\cp-test.txt ha-832100-m04:/home/docker/cp-test.txt: (8.6234351s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test.txt": (8.6901695s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile2068586315\001\cp-test_ha-832100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile2068586315\001\cp-test_ha-832100-m04.txt: (8.6858046s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test.txt": (8.661425s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m04:/home/docker/cp-test.txt ha-832100:/home/docker/cp-test_ha-832100-m04_ha-832100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m04:/home/docker/cp-test.txt ha-832100:/home/docker/cp-test_ha-832100-m04_ha-832100.txt: (15.1901601s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test.txt": (8.697322s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test_ha-832100-m04_ha-832100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100 "sudo cat /home/docker/cp-test_ha-832100-m04_ha-832100.txt": (8.7103802s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m04:/home/docker/cp-test.txt ha-832100-m02:/home/docker/cp-test_ha-832100-m04_ha-832100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m04:/home/docker/cp-test.txt ha-832100-m02:/home/docker/cp-test_ha-832100-m04_ha-832100-m02.txt: (15.1138476s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test.txt": (8.6162659s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test_ha-832100-m04_ha-832100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m02 "sudo cat /home/docker/cp-test_ha-832100-m04_ha-832100-m02.txt": (8.7322202s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m04:/home/docker/cp-test.txt ha-832100-m03:/home/docker/cp-test_ha-832100-m04_ha-832100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 cp ha-832100-m04:/home/docker/cp-test.txt ha-832100-m03:/home/docker/cp-test_ha-832100-m04_ha-832100-m03.txt: (15.2744793s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m04 "sudo cat /home/docker/cp-test.txt": (8.7375314s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test_ha-832100-m04_ha-832100-m03.txt"
E0314 18:43:01.240041   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 ssh -n ha-832100-m03 "sudo cat /home/docker/cp-test_ha-832100-m04_ha-832100-m03.txt": (8.7132283s)
--- PASS: TestMutliControlPlane/serial/CopyFile (574.87s)
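
The CopyFile matrix above repeats a single primitive for every source/target pair: push a file with minikube cp, then read it back via minikube ssh -n with sudo cat. A sketch of one round trip in Go, assuming minikube is on PATH and reusing the profile and node names from this run; the final byte comparison is an assumption about what the helper asserts:

// cp_roundtrip.go - sketch of one cp-test hop (host -> ha-832100-m02),
// mirroring the cp/ssh pairs logged above.
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	const profile, node, remote = "ha-832100", "ha-832100-m02", "/home/docker/cp-test.txt"

	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// Push the file onto the node...
	if err := exec.Command("minikube", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":"+remote).Run(); err != nil {
		panic(err)
	}
	// ...then read it back over SSH and compare with the source.
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat "+remote).Output()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("copied file does not match source")
	}
}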

TestMutliControlPlane/serial/StopSecondaryNode (67.06s)

=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 node stop m02 -v=7 --alsologtostderr
E0314 18:43:18.043781   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-832100 node stop m02 -v=7 --alsologtostderr: (31.9878698s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr
E0314 18:43:38.277474   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-832100 status -v=7 --alsologtostderr: exit status 7 (35.0691077s)
-- stdout --
	ha-832100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-832100-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-832100-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-832100-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	W0314 18:43:36.204478    2572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0314 18:43:36.259742    2572 out.go:291] Setting OutFile to fd 1052 ...
	I0314 18:43:36.259742    2572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:43:36.259742    2572 out.go:304] Setting ErrFile to fd 1472...
	I0314 18:43:36.259742    2572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:43:36.273635    2572 out.go:298] Setting JSON to false
	I0314 18:43:36.273709    2572 mustload.go:65] Loading cluster: ha-832100
	I0314 18:43:36.273801    2572 notify.go:220] Checking for updates...
	I0314 18:43:36.274578    2572 config.go:182] Loaded profile config "ha-832100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:43:36.274578    2572 status.go:255] checking status of ha-832100 ...
	I0314 18:43:36.275189    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:43:38.290005    2572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:43:38.290005    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:43:38.290005    2572 status.go:330] ha-832100 host status = "Running" (err=<nil>)
	I0314 18:43:38.290005    2572 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:43:38.290877    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:43:40.297890    2572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:43:40.298683    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:43:40.298757    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:43:42.699410    2572 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:43:42.699485    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:43:42.699485    2572 host.go:66] Checking if "ha-832100" exists ...
	I0314 18:43:42.709190    2572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:43:42.709190    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100 ).state
	I0314 18:43:44.684776    2572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:43:44.685684    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:43:44.685684    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100 ).networkadapters[0]).ipaddresses[0]
	I0314 18:43:47.106584    2572 main.go:141] libmachine: [stdout =====>] : 172.17.90.10
	
	I0314 18:43:47.106584    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:43:47.107612    2572 sshutil.go:53] new ssh client: &{IP:172.17.90.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100\id_rsa Username:docker}
	I0314 18:43:47.207506    2572 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.4979861s)
	I0314 18:43:47.216738    2572 ssh_runner.go:195] Run: systemctl --version
	I0314 18:43:47.237628    2572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:43:47.263430    2572 kubeconfig.go:125] found "ha-832100" server: "https://172.17.95.254:8443"
	I0314 18:43:47.263497    2572 api_server.go:166] Checking apiserver status ...
	I0314 18:43:47.272432    2572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:43:47.305104    2572 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2314/cgroup
	W0314 18:43:47.321883    2572 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2314/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:43:47.331027    2572 ssh_runner.go:195] Run: ls
	I0314 18:43:47.337091    2572 api_server.go:253] Checking apiserver healthz at https://172.17.95.254:8443/healthz ...
	I0314 18:43:47.345370    2572 api_server.go:279] https://172.17.95.254:8443/healthz returned 200:
	ok
	I0314 18:43:47.345370    2572 status.go:422] ha-832100 apiserver status = Running (err=<nil>)
	I0314 18:43:47.345370    2572 status.go:257] ha-832100 status: &{Name:ha-832100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:43:47.346289    2572 status.go:255] checking status of ha-832100-m02 ...
	I0314 18:43:47.347328    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m02 ).state
	I0314 18:43:49.316886    2572 main.go:141] libmachine: [stdout =====>] : Off
	
	I0314 18:43:49.316886    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:43:49.316886    2572 status.go:330] ha-832100-m02 host status = "Stopped" (err=<nil>)
	I0314 18:43:49.316886    2572 status.go:343] host is not running, skipping remaining checks
	I0314 18:43:49.316886    2572 status.go:257] ha-832100-m02 status: &{Name:ha-832100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:43:49.316886    2572 status.go:255] checking status of ha-832100-m03 ...
	I0314 18:43:49.317964    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:43:51.313113    2572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:43:51.313113    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:43:51.313113    2572 status.go:330] ha-832100-m03 host status = "Running" (err=<nil>)
	I0314 18:43:51.313516    2572 host.go:66] Checking if "ha-832100-m03" exists ...
	I0314 18:43:51.314138    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:43:53.309165    2572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:43:53.309165    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:43:53.309259    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:43:55.721975    2572 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:43:55.722362    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:43:55.722494    2572 host.go:66] Checking if "ha-832100-m03" exists ...
	I0314 18:43:55.732007    2572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:43:55.732007    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m03 ).state
	I0314 18:43:57.718512    2572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:43:57.718512    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:43:57.718512    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m03 ).networkadapters[0]).ipaddresses[0]
	I0314 18:44:00.103113    2572 main.go:141] libmachine: [stdout =====>] : 172.17.89.54
	
	I0314 18:44:00.103495    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:00.103807    2572 sshutil.go:53] new ssh client: &{IP:172.17.89.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m03\id_rsa Username:docker}
	I0314 18:44:00.204486    2572 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.4721478s)
	I0314 18:44:00.215821    2572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:44:00.239547    2572 kubeconfig.go:125] found "ha-832100" server: "https://172.17.95.254:8443"
	I0314 18:44:00.239653    2572 api_server.go:166] Checking apiserver status ...
	I0314 18:44:00.247735    2572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:44:00.286476    2572 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2493/cgroup
	W0314 18:44:00.307941    2572 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2493/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:44:00.316930    2572 ssh_runner.go:195] Run: ls
	I0314 18:44:00.323849    2572 api_server.go:253] Checking apiserver healthz at https://172.17.95.254:8443/healthz ...
	I0314 18:44:00.337320    2572 api_server.go:279] https://172.17.95.254:8443/healthz returned 200:
	ok
	I0314 18:44:00.337320    2572 status.go:422] ha-832100-m03 apiserver status = Running (err=<nil>)
	I0314 18:44:00.337320    2572 status.go:257] ha-832100-m03 status: &{Name:ha-832100-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:44:00.337320    2572 status.go:255] checking status of ha-832100-m04 ...
	I0314 18:44:00.337875    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m04 ).state
	I0314 18:44:02.284846    2572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:44:02.284968    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:02.284968    2572 status.go:330] ha-832100-m04 host status = "Running" (err=<nil>)
	I0314 18:44:02.284968    2572 host.go:66] Checking if "ha-832100-m04" exists ...
	I0314 18:44:02.285996    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m04 ).state
	I0314 18:44:04.282324    2572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:44:04.283078    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:04.283078    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m04 ).networkadapters[0]).ipaddresses[0]
	I0314 18:44:06.640828    2572 main.go:141] libmachine: [stdout =====>] : 172.17.93.81
	
	I0314 18:44:06.640828    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:06.641736    2572 host.go:66] Checking if "ha-832100-m04" exists ...
	I0314 18:44:06.651152    2572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:44:06.651152    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-832100-m04 ).state
	I0314 18:44:08.623518    2572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0314 18:44:08.623889    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:08.623969    2572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-832100-m04 ).networkadapters[0]).ipaddresses[0]
	I0314 18:44:11.005074    2572 main.go:141] libmachine: [stdout =====>] : 172.17.93.81
	
	I0314 18:44:11.005074    2572 main.go:141] libmachine: [stderr =====>] : 
	I0314 18:44:11.005074    2572 sshutil.go:53] new ssh client: &{IP:172.17.93.81 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-832100-m04\id_rsa Username:docker}
	I0314 18:44:11.109565    2572 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.4580823s)
	I0314 18:44:11.117709    2572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:44:11.139058    2572 status.go:257] ha-832100-m04 status: &{Name:ha-832100-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (67.06s)
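
The stderr above documents the status probe on the Hyper-V driver: for each machine it shells out to PowerShell for ( Hyper-V\Get-VM <name> ).state, grabs the first NIC's IP for the SSH-side checks (df -h /var, kubelet activity, apiserver /healthz), and skips the remaining checks once a host reports Off, as m02 does here. A sketch of just the state probe, assuming a Windows session with the Hyper-V PowerShell module available:

// vmstate.go - sketch of the Get-VM state probe seen in the log above;
// assumes Windows with the Hyper-V module and sufficient privileges.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func vmState(name string) (string, error) {
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err // "Running", "Off", ...
}

func main() {
	state, err := vmState("ha-832100-m02")
	if err != nil {
		panic(err)
	}
	// "Off" is reported as host: Stopped, and the kubelet/apiserver
	// checks are skipped for that machine, matching the output above.
	fmt.Println("host state:", state)
}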

TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (19.52s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (19.5198662s)
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (19.52s)

TestImageBuild/serial/Setup (187.38s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-270500 --driver=hyperv
E0314 18:51:41.558632   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-270500 --driver=hyperv: (3m7.3783122s)
--- PASS: TestImageBuild/serial/Setup (187.38s)

TestImageBuild/serial/NormalBuild (8.96s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-270500
E0314 18:53:18.085118   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-270500: (8.9560244s)
--- PASS: TestImageBuild/serial/NormalBuild (8.96s)

TestImageBuild/serial/BuildWithBuildArg (8.36s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-270500
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-270500: (8.3563241s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.36s)

TestImageBuild/serial/BuildWithDockerIgnore (7.18s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-270500
E0314 18:53:38.326422   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-270500: (7.1799183s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.18s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (6.96s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-270500
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-270500: (6.9549177s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (6.96s)

TestJSONOutput/start/Command (200.46s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-707300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-707300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m20.462261s)
--- PASS: TestJSONOutput/start/Command (200.46s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.28s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-707300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-707300 --output=json --user=testUser: (7.2777258s)
--- PASS: TestJSONOutput/pause/Command (7.28s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.29s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-707300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-707300 --output=json --user=testUser: (7.2891408s)
--- PASS: TestJSONOutput/unpause/Command (7.29s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (36.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-707300 --output=json --user=testUser
E0314 18:58:18.110729   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 18:58:38.347487   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-707300 --output=json --user=testUser: (36.8779868s)
--- PASS: TestJSONOutput/stop/Command (36.88s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-561500 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-561500 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (285.8474ms)
-- stdout --
	{"specversion":"1.0","id":"54cc1546-9a0f-4989-a312-a7dafc3c22be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-561500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"82955eae-07fb-4451-80c9-881ec2212a22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"f98f532a-cd60-4c2c-b432-33adb47a1521","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bba2a21c-86ba-4c37-b2a5-2b8b49a8c631","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"188b129a-69d1-4103-88d3-0af6366e8920","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18384"}}
	{"specversion":"1.0","id":"5dddbffe-7230-4ff6-925e-f762a694dd43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"360b377e-47fa-402c-af26-bbc09ef160a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0314 18:59:00.145452    4308 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-561500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-561500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-561500: (1.047212s)
--- PASS: TestErrorJSONOutput (1.33s)
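
Every stdout line above is a CloudEvents-style JSON object (specversion, id, source, type, datacontenttype, data); the final io.k8s.sigs.minikube.error event carries the DRV_UNSUPPORTED_OS name and exit code 56. A sketch of a decoder for that stream, keeping data as a string map since its keys differ between step, info, and error events:

// events.go - sketch of parsing minikube's --output=json event stream,
// e.g.: minikube start --output=json ... | events
// Field names are taken from the events printed above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines mixed into the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}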

TestMainNoArgs (0.22s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.22s)

TestMinikubeProfile (500.67s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-725200 --driver=hyperv
E0314 18:59:41.326932   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-725200 --driver=hyperv: (3m7.9074915s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-725200 --driver=hyperv
E0314 19:03:18.121070   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 19:03:38.366511   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-725200 --driver=hyperv: (3m7.4390584s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-725200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.9414695s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-725200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.7968487s)
helpers_test.go:175: Cleaning up "second-725200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-725200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-725200: (44.9258252s)
helpers_test.go:175: Cleaning up "first-725200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-725200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-725200: (39.8795337s)
--- PASS: TestMinikubeProfile (500.67s)

TestMountStart/serial/StartWithMountFirst (145.81s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-049100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0314 19:08:18.147741   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 19:08:21.641367   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 19:08:38.401098   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-049100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m24.796391s)
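Taken together with the Verify* steps that follow, the assertion here is simply that the host share is visible at /minikube-host inside a Kubernetes-free guest. A minimal sketch of the pattern, reusing the profile and flags recorded above (the gid/uid/msize mount options are omitted here for brevity):

    out/minikube-windows-amd64.exe start -p mount-start-1-049100 --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=hyperv
    out/minikube-windows-amd64.exe -p mount-start-1-049100 ssh -- ls /minikube-host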
--- PASS: TestMountStart/serial/StartWithMountFirst (145.81s)

TestMountStart/serial/VerifyMountFirst (8.79s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-049100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-049100 ssh -- ls /minikube-host: (8.7901424s)
--- PASS: TestMountStart/serial/VerifyMountFirst (8.79s)

TestMountStart/serial/StartWithMountSecond (146.45s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-049100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-049100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m25.4359674s)
--- PASS: TestMountStart/serial/StartWithMountSecond (146.45s)

TestMountStart/serial/VerifyMountSecond (8.84s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-049100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-049100 ssh -- ls /minikube-host: (8.8358879s)
--- PASS: TestMountStart/serial/VerifyMountSecond (8.84s)

TestMountStart/serial/DeleteFirst (29.21s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-049100 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-049100 --alsologtostderr -v=5: (29.2117116s)
--- PASS: TestMountStart/serial/DeleteFirst (29.21s)

TestMountStart/serial/VerifyMountPostDelete (8.93s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-049100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-049100 ssh -- ls /minikube-host: (8.9309026s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.93s)

TestMountStart/serial/Stop (24.7s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-049100
E0314 19:13:18.166971   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-049100: (24.7010333s)
--- PASS: TestMountStart/serial/Stop (24.70s)

TestMountStart/serial/RestartStopped (111.21s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-049100
E0314 19:13:38.413877   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-049100: (1m50.2030958s)
--- PASS: TestMountStart/serial/RestartStopped (111.21s)

TestMountStart/serial/VerifyMountPostStop (8.79s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-049100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-049100 ssh -- ls /minikube-host: (8.7873245s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (8.79s)

TestMultiNode/serial/FreshStart2Nodes (399.19s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-442000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0314 19:16:21.409526   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 19:18:18.200914   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 19:18:38.444460   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-442000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m17.1578384s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 status --alsologtostderr: (22.0361278s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (399.19s)

TestMultiNode/serial/DeployApp2Nodes (8.77s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- rollout status deployment/busybox: (3.1743177s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-7446n -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-7446n -- nslookup kubernetes.io: (1.8413547s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-8drpb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-7446n -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-8drpb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-7446n -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-8drpb -- nslookup kubernetes.default.svc.cluster.local
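Each of the exec calls above is the same in-pod DNS probe, run once per busybox replica so that resolution is verified from both nodes. One leg, verbatim from this run:

    out/minikube-windows-amd64.exe kubectl -p multinode-442000 -- exec busybox-5b5d89c9d6-7446n -- nslookup kubernetes.default.svc.cluster.local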
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.77s)

TestMultiNode/serial/AddNode (211.54s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-442000 -v 3 --alsologtostderr
E0314 19:25:01.725984   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-442000 -v 3 --alsologtostderr: (2m58.8307604s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 status --alsologtostderr: (32.7045031s)
--- PASS: TestMultiNode/serial/AddNode (211.54s)

TestMultiNode/serial/MultiNodeLabels (0.18s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-442000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

TestMultiNode/serial/ProfileList (11.34s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.3369175s)
--- PASS: TestMultiNode/serial/ProfileList (11.34s)

TestMultiNode/serial/CopyFile (331.42s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 status --output json --alsologtostderr: (32.9556859s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 cp testdata\cp-test.txt multinode-442000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 cp testdata\cp-test.txt multinode-442000:/home/docker/cp-test.txt: (8.6960184s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000 "sudo cat /home/docker/cp-test.txt"
E0314 19:28:18.245321   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000 "sudo cat /home/docker/cp-test.txt": (8.6807651s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1678027892\001\cp-test_multinode-442000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1678027892\001\cp-test_multinode-442000.txt: (8.6547641s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000 "sudo cat /home/docker/cp-test.txt": (8.6007195s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000:/home/docker/cp-test.txt multinode-442000-m02:/home/docker/cp-test_multinode-442000_multinode-442000-m02.txt
E0314 19:28:38.486903   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000:/home/docker/cp-test.txt multinode-442000-m02:/home/docker/cp-test_multinode-442000_multinode-442000-m02.txt: (15.0957001s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000 "sudo cat /home/docker/cp-test.txt": (8.5838743s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test_multinode-442000_multinode-442000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test_multinode-442000_multinode-442000-m02.txt": (8.7231406s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000:/home/docker/cp-test.txt multinode-442000-m03:/home/docker/cp-test_multinode-442000_multinode-442000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000:/home/docker/cp-test.txt multinode-442000-m03:/home/docker/cp-test_multinode-442000_multinode-442000-m03.txt: (15.1123103s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000 "sudo cat /home/docker/cp-test.txt": (8.6855218s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m03 "sudo cat /home/docker/cp-test_multinode-442000_multinode-442000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m03 "sudo cat /home/docker/cp-test_multinode-442000_multinode-442000-m03.txt": (8.6116178s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 cp testdata\cp-test.txt multinode-442000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 cp testdata\cp-test.txt multinode-442000-m02:/home/docker/cp-test.txt: (8.6893619s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test.txt": (8.5694782s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1678027892\001\cp-test_multinode-442000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1678027892\001\cp-test_multinode-442000-m02.txt: (8.6588682s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test.txt": (8.6570794s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000-m02:/home/docker/cp-test.txt multinode-442000:/home/docker/cp-test_multinode-442000-m02_multinode-442000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000-m02:/home/docker/cp-test.txt multinode-442000:/home/docker/cp-test_multinode-442000-m02_multinode-442000.txt: (15.1487372s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test.txt": (8.694776s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000 "sudo cat /home/docker/cp-test_multinode-442000-m02_multinode-442000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000 "sudo cat /home/docker/cp-test_multinode-442000-m02_multinode-442000.txt": (8.6888613s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000-m02:/home/docker/cp-test.txt multinode-442000-m03:/home/docker/cp-test_multinode-442000-m02_multinode-442000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000-m02:/home/docker/cp-test.txt multinode-442000-m03:/home/docker/cp-test_multinode-442000-m02_multinode-442000-m03.txt: (15.1732898s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test.txt": (8.7083714s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m03 "sudo cat /home/docker/cp-test_multinode-442000-m02_multinode-442000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m03 "sudo cat /home/docker/cp-test_multinode-442000-m02_multinode-442000-m03.txt": (8.6323462s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 cp testdata\cp-test.txt multinode-442000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 cp testdata\cp-test.txt multinode-442000-m03:/home/docker/cp-test.txt: (8.642722s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m03 "sudo cat /home/docker/cp-test.txt": (8.6311441s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1678027892\001\cp-test_multinode-442000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1678027892\001\cp-test_multinode-442000-m03.txt: (8.626619s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m03 "sudo cat /home/docker/cp-test.txt": (8.6638759s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000-m03:/home/docker/cp-test.txt multinode-442000:/home/docker/cp-test_multinode-442000-m03_multinode-442000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000-m03:/home/docker/cp-test.txt multinode-442000:/home/docker/cp-test_multinode-442000-m03_multinode-442000.txt: (15.0835887s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m03 "sudo cat /home/docker/cp-test.txt": (8.6853104s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000 "sudo cat /home/docker/cp-test_multinode-442000-m03_multinode-442000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000 "sudo cat /home/docker/cp-test_multinode-442000-m03_multinode-442000.txt": (8.6805437s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000-m03:/home/docker/cp-test.txt multinode-442000-m02:/home/docker/cp-test_multinode-442000-m03_multinode-442000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000-m03:/home/docker/cp-test.txt multinode-442000-m02:/home/docker/cp-test_multinode-442000-m03_multinode-442000-m02.txt: (15.05589s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m03 "sudo cat /home/docker/cp-test.txt": (8.6571954s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test_multinode-442000-m03_multinode-442000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test_multinode-442000-m03_multinode-442000-m02.txt": (8.6493796s)
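Every leg of the copy matrix above follows the same two-step pattern: cp the file to the target, then read it back over ssh from the receiving node. One node-to-node hop, as recorded in this run:

    out/minikube-windows-amd64.exe -p multinode-442000 cp multinode-442000:/home/docker/cp-test.txt multinode-442000-m02:/home/docker/cp-test_multinode-442000_multinode-442000-m02.txt
    out/minikube-windows-amd64.exe -p multinode-442000 ssh -n multinode-442000-m02 "sudo cat /home/docker/cp-test_multinode-442000_multinode-442000-m02.txt"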
--- PASS: TestMultiNode/serial/CopyFile (331.42s)

TestMultiNode/serial/StartAfterStop (171.46s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 node start m03 -v=7 --alsologtostderr: (2m18.6427036s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-442000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-442000 status -v=7 --alsologtostderr: (32.6590919s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (171.46s)

TestPreload (489.36s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-905700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0314 19:49:41.573179   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 19:53:18.346491   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 19:53:38.602912   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-905700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m12.946814s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-905700 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-905700 image pull gcr.io/k8s-minikube/busybox: (7.7606591s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-905700
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-905700: (37.4655791s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-905700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-905700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m24.7476022s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-905700 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-905700 image list: (6.7144353s)
helpers_test.go:175: Cleaning up "test-preload-905700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-905700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-905700: (39.7269777s)
--- PASS: TestPreload (489.36s)

TestScheduledStopWindows (318.54s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-622900 --memory=2048 --driver=hyperv
E0314 19:58:18.377277   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 19:58:21.888200   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 19:58:38.618662   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-622900 --memory=2048 --driver=hyperv: (3m7.1308859s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-622900 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-622900 --schedule 5m: (9.883515s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-622900 -n scheduled-stop-622900
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-622900 -n scheduled-stop-622900: exit status 1 (10.0161405s)

** stderr ** 
	W0314 20:00:56.691334    5184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-622900 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-622900 -- sudo systemctl show minikube-scheduled-stop --no-page: (8.8416098s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-622900 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-622900 --schedule 5s: (9.916395s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-622900
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-622900: exit status 7 (2.2060979s)

-- stdout --
	scheduled-stop-622900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
** stderr ** 
	W0314 20:02:25.468772   12396 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-622900 -n scheduled-stop-622900
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-622900 -n scheduled-stop-622900: exit status 7 (2.22224s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0314 20:02:27.685064    6808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
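The steps above are the whole scheduled-stop contract: --schedule arms the minikube-scheduled-stop systemd unit inside the guest (which is why the test inspects it with systemctl over ssh), and once it fires, status reports Stopped with exit code 7. A minimal replay of this run's commands:

    out/minikube-windows-amd64.exe stop -p scheduled-stop-622900 --schedule 5s
    out/minikube-windows-amd64.exe ssh -p scheduled-stop-622900 -- sudo systemctl show minikube-scheduled-stop --no-page
    out/minikube-windows-amd64.exe status -p scheduled-stop-622900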
helpers_test.go:175: Cleaning up "scheduled-stop-622900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-622900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-622900: (28.3211486s)
--- PASS: TestScheduledStopWindows (318.54s)

TestRunningBinaryUpgrade (914.55s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3459253390.exe start -p running-upgrade-630600 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3459253390.exe start -p running-upgrade-630600 --memory=2200 --vm-driver=hyperv: (6m2.0522647s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-630600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0314 20:13:18.432599   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 20:13:38.680275   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
E0314 20:15:01.964785   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-630600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (8m7.9478936s)
helpers_test.go:175: Cleaning up "running-upgrade-630600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-630600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-630600: (1m3.55327s)
--- PASS: TestRunningBinaryUpgrade (914.55s)

TestKubernetesUpgrade (1082.3s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-394100 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
E0314 20:08:18.425066   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-394100 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (7m9.1063094s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-394100
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-394100: (31.7262589s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-394100 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-394100 status --format={{.Host}}: exit status 7 (2.2258448s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0314 20:15:56.031995    7828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-394100 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-394100 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (6m4.395092s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-394100 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-394100 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-394100 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (1.8180976s)

-- stdout --
	* [kubernetes-upgrade-394100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0314 20:22:02.858702    1336 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-394100
	    minikube start -p kubernetes-upgrade-394100 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3941002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-394100 --kubernetes-version=v1.29.0-rc.2

** /stderr **
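Exit status 106 maps to the K8S_DOWNGRADE_UNSUPPORTED reason shown above: an existing profile can only move forward. Recreating the profile (suggestion 1) is the only route to the older version; restarting at the current version (suggestion 3) is what this test does next. The recreate pair, repeated from the suggestion for reference:

    minikube delete -p kubernetes-upgrade-394100
    minikube start -p kubernetes-upgrade-394100 --kubernetes-version=v1.20.0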
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-394100 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-394100 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (3m27.7694657s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-394100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-394100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-394100: (45.0994075s)
--- PASS: TestKubernetesUpgrade (1082.30s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-956500 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-956500 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (342.1646ms)

-- stdout --
	* [NoKubernetes-956500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0314 20:02:58.238447   14136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
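Exit status 14 (MK_USAGE) confirms the two flags are mutually exclusive. A minimal sketch of the fix the error message suggests, assuming the pinned version came from global config (the retry command is illustrative and not part of this log):

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-956500 --no-kubernetes --driver=hyperv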
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)

TestStoppedBinaryUpgrade/Setup (0.85s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)

TestStoppedBinaryUpgrade/Upgrade (863.43s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.161921422.exe start -p stopped-upgrade-326500 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.161921422.exe start -p stopped-upgrade-326500 --memory=2200 --vm-driver=hyperv: (8m6.3662058s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.161921422.exe -p stopped-upgrade-326500 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.161921422.exe -p stopped-upgrade-326500 stop: (32.9092661s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-326500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0314 20:18:18.462205   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-953400\client.crt: The system cannot find the path specified.
E0314 20:18:38.713743   11052 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-866600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-326500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (5m44.148364s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (863.43s)

TestPause/serial/Start (505.33s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-478200 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-478200 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (8m25.3299994s)
--- PASS: TestPause/serial/Start (505.33s)

TestPause/serial/SecondStartNoReconfiguration (260.51s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-478200 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-478200 --alsologtostderr -v=1 --driver=hyperv: (4m20.4794429s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (260.51s)

TestStoppedBinaryUpgrade/MinikubeLogs (9.26s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-326500
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-326500: (9.2622118s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.26s)

TestPause/serial/Pause (8.51s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-478200 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-478200 --alsologtostderr -v=5: (8.5063346s)
--- PASS: TestPause/serial/Pause (8.51s)

TestPause/serial/VerifyStatus (12.51s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-478200 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-478200 --output=json --layout=cluster: exit status 2 (12.5060945s)

-- stdout --
	{"Name":"pause-478200","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-478200","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0314 20:24:48.834230    1696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
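The cluster-layout JSON above encodes state as HTTP-like status codes: 418 for Paused, 405 for Stopped, 200 for OK. The paused state is therefore visible both in the payload and in the command's exit code (2 in this run), which is why the Non-zero exit above is the expected result for a paused cluster. The probe itself:

    out/minikube-windows-amd64.exe status -p pause-478200 --output=json --layout=cluster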
--- PASS: TestPause/serial/VerifyStatus (12.51s)

TestPause/serial/Unpause (7.48s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-478200 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-478200 --alsologtostderr -v=5: (7.4742941s)
--- PASS: TestPause/serial/Unpause (7.48s)

TestPause/serial/PauseAgain (7.57s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-478200 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-478200 --alsologtostderr -v=5: (7.5691237s)
--- PASS: TestPause/serial/PauseAgain (7.57s)

TestPause/serial/DeletePaused (47.78s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-478200 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-478200 --alsologtostderr -v=5: (47.7849965s)
--- PASS: TestPause/serial/DeletePaused (47.78s)

TestPause/serial/VerifyDeletedResources (9.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.4462519s)
--- PASS: TestPause/serial/VerifyDeletedResources (9.45s)

Test skip (32/217)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-866600 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-866600 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 12680: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)
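
This entry starts minikube dashboard --url as a background daemon and then waits for a URL to appear on stdout; on this run nothing was printed within the 300s budget, and the orphaned process could not be terminated afterwards. A rough, standard-library-only sketch of that start-and-scan pattern follows; the function name, timeout handling, and URL heuristic are assumptions for illustration, not the harness's real code.

package main

import (
	"bufio"
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForURL launches the given command as a background process and returns
// the first stdout line that looks like a URL, or an error if the process
// exits or the timeout expires first.
func waitForURL(args []string, timeout time.Duration) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel() // cancelling also kills the child process

	cmd := exec.CommandContext(ctx, args[0], args[1:]...)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return "", err
	}
	if err := cmd.Start(); err != nil {
		return "", err
	}

	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "http://") || strings.HasPrefix(line, "https://") {
			return line, nil
		}
	}
	return "", fmt.Errorf("no URL before exit or timeout")
}

func main() {
	url, err := waitForURL(
		[]string{"out/minikube-windows-amd64.exe", "dashboard", "--url", "-p", "functional-866600"},
		5*time.Minute)
	fmt.Println(url, err)
}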

TestFunctional/parallel/DryRun (5.04s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-866600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-866600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0340652s)

-- stdout --
	* [functional-866600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0314 18:09:50.053717   14068 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0314 18:09:50.104659   14068 out.go:291] Setting OutFile to fd 1196 ...
	I0314 18:09:50.105074   14068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:09:50.105074   14068 out.go:304] Setting ErrFile to fd 696...
	I0314 18:09:50.105074   14068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:09:50.125909   14068 out.go:298] Setting JSON to false
	I0314 18:09:50.129981   14068 start.go:129] hostinfo: {"hostname":"minikube7","uptime":61594,"bootTime":1710378195,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 18:09:50.129981   14068 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 18:09:50.156888   14068 out.go:177] * [functional-866600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 18:09:50.160566   14068 notify.go:220] Checking for updates...
	I0314 18:09:50.162637   14068 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:09:50.164471   14068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:09:50.166978   14068 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 18:09:50.169152   14068 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:09:50.171591   14068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:09:50.174954   14068 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:09:50.175961   14068 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV until this issue is resolved: https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)
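
DryRun passes --dry-run together with a deliberately small --memory 250MB, so start exercises its validation path without creating a VM, and the test then inspects the exit status; on HyperV the outcome is discarded per the issue above. A hedged sketch of that exit-code check is below; the command line is copied from the log, while the surrounding program is illustrative.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"start", "-p", "functional-866600",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=hyperv")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// --dry-run with an undersized memory request is expected to fail
		// validation, so a non-zero exit code is the condition being probed.
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Printf("unexpected result: err=%v\n%s", err, out)
}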

TestFunctional/parallel/InternationalLanguage (5.05s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-866600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-866600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0533869s)

-- stdout --
	* [functional-866600] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0314 18:09:33.964183    7224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0314 18:09:34.021185    7224 out.go:291] Setting OutFile to fd 812 ...
	I0314 18:09:34.021185    7224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:09:34.021185    7224 out.go:304] Setting ErrFile to fd 1080...
	I0314 18:09:34.021185    7224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:09:34.041187    7224 out.go:298] Setting JSON to false
	I0314 18:09:34.044629    7224 start.go:129] hostinfo: {"hostname":"minikube7","uptime":61578,"bootTime":1710378195,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0314 18:09:34.044629    7224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 18:09:34.051493    7224 out.go:177] * [functional-866600] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0314 18:09:34.053837    7224 notify.go:220] Checking for updates...
	I0314 18:09:34.056546    7224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0314 18:09:34.058631    7224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:09:34.061554    7224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0314 18:09:34.064609    7224 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:09:34.067479    7224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:09:34.070369    7224 config.go:182] Loaded profile config "functional-866600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:09:34.071816    7224 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV until this issue is resolved: https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.05s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
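
Unlike the platform-gated skips, TestGvisorAddon is gated by a flag on the test binary that defaults to false. A minimal sketch of such a flag-gated skip follows, assuming a boolean -gvisor flag passed via go test -args; the wiring here is illustrative, not minikube's exact code.

package integration

import (
	"flag"
	"testing"
)

// enableGvisor must be set explicitly (go test -args -gvisor) to opt in,
// since the addon needs a container runtime that supports gVisor.
var enableGvisor = flag.Bool("gvisor", false, "run tests that require the gvisor addon")

func TestGvisorAddonSketch(t *testing.T) {
	if !*enableGvisor {
		t.Skipf("skipping test because --gvisor=%t", *enableGvisor)
	}
	// ... gvisor runtime assertions would run here ...
}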

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
